00:00:00.001 Started by upstream project "autotest-per-patch" build number 132846 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.566 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.578 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.590 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.590 > git config core.sparsecheckout # timeout=10 00:00:05.602 > git read-tree -mu HEAD # timeout=10 00:00:05.618 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.638 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.638 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.733 [Pipeline] Start of Pipeline 00:00:05.744 [Pipeline] library 00:00:05.746 Loading library shm_lib@master 00:00:05.746 Library shm_lib@master is cached. Copying from home. 00:00:05.758 [Pipeline] node 00:00:05.769 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.770 [Pipeline] { 00:00:05.778 [Pipeline] catchError 00:00:05.779 [Pipeline] { 00:00:05.791 [Pipeline] wrap 00:00:05.799 [Pipeline] { 00:00:05.808 [Pipeline] stage 00:00:05.810 [Pipeline] { (Prologue) 00:00:05.826 [Pipeline] echo 00:00:05.828 Node: VM-host-SM0 00:00:05.834 [Pipeline] cleanWs 00:00:05.844 [WS-CLEANUP] Deleting project workspace... 00:00:05.844 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.850 [WS-CLEANUP] done 00:00:06.043 [Pipeline] setCustomBuildProperty 00:00:06.134 [Pipeline] httpRequest 00:00:07.174 [Pipeline] echo 00:00:07.175 Sorcerer 10.211.164.20 is alive 00:00:07.183 [Pipeline] retry 00:00:07.185 [Pipeline] { 00:00:07.196 [Pipeline] httpRequest 00:00:07.200 HttpMethod: GET 00:00:07.201 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.201 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.213 Response Code: HTTP/1.1 200 OK 00:00:07.213 Success: Status code 200 is in the accepted range: 200,404 00:00:07.214 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.487 [Pipeline] } 00:00:10.505 [Pipeline] // retry 00:00:10.512 [Pipeline] sh 00:00:10.794 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.810 [Pipeline] httpRequest 00:00:11.191 [Pipeline] echo 00:00:11.193 Sorcerer 10.211.164.20 is alive 00:00:11.203 [Pipeline] retry 00:00:11.205 [Pipeline] { 00:00:11.220 [Pipeline] httpRequest 00:00:11.224 HttpMethod: GET 00:00:11.225 URL: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:00:11.226 Sending request to url: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:00:11.244 Response Code: HTTP/1.1 200 OK 00:00:11.245 Success: Status code 200 is in the accepted range: 200,404 00:00:11.246 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:01:14.640 [Pipeline] } 00:01:14.657 [Pipeline] // retry 00:01:14.665 [Pipeline] sh 00:01:15.019 + tar --no-same-owner -xf spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:01:18.353 [Pipeline] sh 00:01:18.632 + git -C spdk log --oneline -n5 00:01:18.632 4dfeb7f95 mk/spdk.common.mk Use pattern substitution instead of prefix removal 00:01:18.632 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:18.632 66289a6db build: use VERSION file for storing version 00:01:18.632 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:18.632 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:18.648 [Pipeline] writeFile 00:01:18.662 [Pipeline] sh 00:01:18.942 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:18.952 [Pipeline] sh 00:01:19.229 + cat autorun-spdk.conf 00:01:19.229 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.229 SPDK_TEST_NVMF=1 00:01:19.229 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.229 SPDK_TEST_URING=1 00:01:19.229 SPDK_TEST_USDT=1 00:01:19.229 SPDK_RUN_UBSAN=1 00:01:19.229 NET_TYPE=virt 00:01:19.229 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.236 RUN_NIGHTLY=0 00:01:19.238 [Pipeline] } 00:01:19.252 [Pipeline] // stage 00:01:19.266 [Pipeline] stage 00:01:19.268 [Pipeline] { (Run VM) 00:01:19.281 [Pipeline] sh 00:01:19.562 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.562 + echo 'Start stage prepare_nvme.sh' 00:01:19.562 Start stage prepare_nvme.sh 00:01:19.562 + [[ -n 7 ]] 00:01:19.562 + disk_prefix=ex7 00:01:19.562 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:19.562 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:19.562 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:19.562 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.562 ++ SPDK_TEST_NVMF=1 00:01:19.562 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.562 ++ 
SPDK_TEST_URING=1 00:01:19.562 ++ SPDK_TEST_USDT=1 00:01:19.562 ++ SPDK_RUN_UBSAN=1 00:01:19.562 ++ NET_TYPE=virt 00:01:19.562 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.562 ++ RUN_NIGHTLY=0 00:01:19.562 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:19.562 + nvme_files=() 00:01:19.562 + declare -A nvme_files 00:01:19.562 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.562 + nvme_files['nvme.img']=5G 00:01:19.562 + nvme_files['nvme-cmb.img']=5G 00:01:19.562 + nvme_files['nvme-multi0.img']=4G 00:01:19.562 + nvme_files['nvme-multi1.img']=4G 00:01:19.562 + nvme_files['nvme-multi2.img']=4G 00:01:19.562 + nvme_files['nvme-openstack.img']=8G 00:01:19.562 + nvme_files['nvme-zns.img']=5G 00:01:19.562 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.562 + (( SPDK_TEST_FTL == 1 )) 00:01:19.562 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.562 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:19.562 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.562 + for nvme in "${!nvme_files[@]}" 00:01:19.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:19.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.821 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:19.821 + echo 'End stage prepare_nvme.sh' 00:01:19.821 End stage prepare_nvme.sh 00:01:19.832 [Pipeline] sh 00:01:20.170 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:20.170 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b 
/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:20.170 00:01:20.170 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:20.170 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:20.170 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.170 HELP=0 00:01:20.170 DRY_RUN=0 00:01:20.170 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:20.170 NVME_DISKS_TYPE=nvme,nvme, 00:01:20.170 NVME_AUTO_CREATE=0 00:01:20.170 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:20.170 NVME_CMB=,, 00:01:20.170 NVME_PMR=,, 00:01:20.170 NVME_ZNS=,, 00:01:20.170 NVME_MS=,, 00:01:20.170 NVME_FDP=,, 00:01:20.170 SPDK_VAGRANT_DISTRO=fedora39 00:01:20.170 SPDK_VAGRANT_VMCPU=10 00:01:20.170 SPDK_VAGRANT_VMRAM=12288 00:01:20.170 SPDK_VAGRANT_PROVIDER=libvirt 00:01:20.170 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:20.170 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:20.170 SPDK_OPENSTACK_NETWORK=0 00:01:20.170 VAGRANT_PACKAGE_BOX=0 00:01:20.170 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:20.170 FORCE_DISTRO=true 00:01:20.170 VAGRANT_BOX_VERSION= 00:01:20.170 EXTRA_VAGRANTFILES= 00:01:20.170 NIC_MODEL=e1000 00:01:20.170 00:01:20.170 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:20.170 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:23.456 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.022 ==> default: Creating image (snapshot of base box volume). 00:01:24.022 ==> default: Creating domain with the following settings... 
00:01:24.022 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733924536_1c35277c521b9025a92c 00:01:24.022 ==> default: -- Domain type: kvm 00:01:24.022 ==> default: -- Cpus: 10 00:01:24.022 ==> default: -- Feature: acpi 00:01:24.022 ==> default: -- Feature: apic 00:01:24.022 ==> default: -- Feature: pae 00:01:24.022 ==> default: -- Memory: 12288M 00:01:24.022 ==> default: -- Memory Backing: hugepages: 00:01:24.022 ==> default: -- Management MAC: 00:01:24.022 ==> default: -- Loader: 00:01:24.022 ==> default: -- Nvram: 00:01:24.022 ==> default: -- Base box: spdk/fedora39 00:01:24.022 ==> default: -- Storage pool: default 00:01:24.022 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733924536_1c35277c521b9025a92c.img (20G) 00:01:24.022 ==> default: -- Volume Cache: default 00:01:24.022 ==> default: -- Kernel: 00:01:24.022 ==> default: -- Initrd: 00:01:24.022 ==> default: -- Graphics Type: vnc 00:01:24.022 ==> default: -- Graphics Port: -1 00:01:24.022 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.022 ==> default: -- Graphics Password: Not defined 00:01:24.022 ==> default: -- Video Type: cirrus 00:01:24.022 ==> default: -- Video VRAM: 9216 00:01:24.022 ==> default: -- Sound Type: 00:01:24.022 ==> default: -- Keymap: en-us 00:01:24.022 ==> default: -- TPM Path: 00:01:24.022 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.022 ==> default: -- Command line args: 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.022 ==> default: -> value=-drive, 00:01:24.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.022 ==> default: -> value=-drive, 00:01:24.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.022 ==> default: -> value=-drive, 00:01:24.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.022 ==> default: -> value=-drive, 00:01:24.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:24.022 ==> default: -> value=-device, 00:01:24.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.280 ==> default: Creating shared folders metadata... 00:01:24.280 ==> default: Starting domain. 00:01:26.183 ==> default: Waiting for domain to get an IP address... 00:01:44.259 ==> default: Waiting for SSH to become available... 00:01:44.259 ==> default: Configuring and enabling network interfaces... 
00:01:47.544 default: SSH address: 192.168.121.57:22 00:01:47.544 default: SSH username: vagrant 00:01:47.544 default: SSH auth method: private key 00:01:49.447 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.585 ==> default: Mounting SSHFS shared folder... 00:01:58.532 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.533 ==> default: Checking Mount.. 00:01:59.932 ==> default: Folder Successfully Mounted! 00:01:59.932 ==> default: Running provisioner: file... 00:02:00.499 default: ~/.gitconfig => .gitconfig 00:02:00.758 00:02:00.758 SUCCESS! 00:02:00.758 00:02:00.758 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.758 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.758 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.758 00:02:00.768 [Pipeline] } 00:02:00.781 [Pipeline] // stage 00:02:00.789 [Pipeline] dir 00:02:00.790 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:00.791 [Pipeline] { 00:02:00.802 [Pipeline] catchError 00:02:00.803 [Pipeline] { 00:02:00.814 [Pipeline] sh 00:02:01.092 + vagrant ssh-config --host vagrant 00:02:01.092 + sed -ne /^Host/,$p 00:02:01.092 + tee ssh_conf 00:02:04.375 Host vagrant 00:02:04.375 HostName 192.168.121.57 00:02:04.375 User vagrant 00:02:04.375 Port 22 00:02:04.375 UserKnownHostsFile /dev/null 00:02:04.375 StrictHostKeyChecking no 00:02:04.375 PasswordAuthentication no 00:02:04.375 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.375 IdentitiesOnly yes 00:02:04.375 LogLevel FATAL 00:02:04.375 ForwardAgent yes 00:02:04.375 ForwardX11 yes 00:02:04.375 00:02:04.388 [Pipeline] withEnv 00:02:04.391 [Pipeline] { 00:02:04.404 [Pipeline] sh 00:02:04.683 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.683 source /etc/os-release 00:02:04.683 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.683 # Minimal, systemd-like check. 00:02:04.683 if [[ -e /.dockerenv ]]; then 00:02:04.683 # Clear garbage from the node's name: 00:02:04.683 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.683 # $HOSTNAME is the actual container id 00:02:04.683 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.683 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.683 # We can assume this is a mount from a host where container is running, 00:02:04.683 # so fetch its hostname to easily identify the target swarm worker. 
00:02:04.683 container="$(< /etc/hostname) ($agent)" 00:02:04.683 else 00:02:04.683 # Fallback 00:02:04.683 container=$agent 00:02:04.683 fi 00:02:04.683 fi 00:02:04.683 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.683 00:02:04.952 [Pipeline] } 00:02:04.969 [Pipeline] // withEnv 00:02:04.978 [Pipeline] setCustomBuildProperty 00:02:04.992 [Pipeline] stage 00:02:04.995 [Pipeline] { (Tests) 00:02:05.012 [Pipeline] sh 00:02:05.290 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.560 [Pipeline] sh 00:02:05.837 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.108 [Pipeline] timeout 00:02:06.108 Timeout set to expire in 1 hr 0 min 00:02:06.110 [Pipeline] { 00:02:06.123 [Pipeline] sh 00:02:06.399 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.965 HEAD is now at 4dfeb7f95 mk/spdk.common.mk Use pattern substitution instead of prefix removal 00:02:06.976 [Pipeline] sh 00:02:07.253 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.524 [Pipeline] sh 00:02:07.819 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.114 [Pipeline] sh 00:02:08.426 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:08.684 ++ readlink -f spdk_repo 00:02:08.684 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.684 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.684 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.684 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.684 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.684 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.684 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.684 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:08.684 + cd /home/vagrant/spdk_repo 00:02:08.684 + source /etc/os-release 00:02:08.684 ++ NAME='Fedora Linux' 00:02:08.684 ++ VERSION='39 (Cloud Edition)' 00:02:08.684 ++ ID=fedora 00:02:08.684 ++ VERSION_ID=39 00:02:08.684 ++ VERSION_CODENAME= 00:02:08.684 ++ PLATFORM_ID=platform:f39 00:02:08.684 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.684 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.684 ++ LOGO=fedora-logo-icon 00:02:08.684 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.684 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.684 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.684 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.684 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.684 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.684 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.684 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.684 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.684 ++ SUPPORT_END=2024-11-12 00:02:08.684 ++ VARIANT='Cloud Edition' 00:02:08.684 ++ VARIANT_ID=cloud 00:02:08.684 + uname -a 00:02:08.684 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.684 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.249 Hugepages 00:02:09.249 node hugesize free / total 00:02:09.249 node0 1048576kB 0 / 0 00:02:09.249 node0 2048kB 0 / 0 00:02:09.249 00:02:09.249 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.249 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.249 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.249 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:09.249 + rm -f /tmp/spdk-ld-path 00:02:09.250 + source autorun-spdk.conf 00:02:09.250 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.250 ++ SPDK_TEST_NVMF=1 00:02:09.250 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.250 ++ SPDK_TEST_URING=1 00:02:09.250 ++ SPDK_TEST_USDT=1 00:02:09.250 ++ SPDK_RUN_UBSAN=1 00:02:09.250 ++ NET_TYPE=virt 00:02:09.250 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.250 ++ RUN_NIGHTLY=0 00:02:09.250 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.250 + [[ -n '' ]] 00:02:09.250 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.250 + for M in /var/spdk/build-*-manifest.txt 00:02:09.250 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.250 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.250 + for M in /var/spdk/build-*-manifest.txt 00:02:09.250 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.250 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.250 + for M in /var/spdk/build-*-manifest.txt 00:02:09.250 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.250 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.250 ++ uname 00:02:09.250 + [[ Linux == \L\i\n\u\x ]] 00:02:09.250 + sudo dmesg -T 00:02:09.250 + sudo dmesg --clear 00:02:09.250 + dmesg_pid=5261 00:02:09.250 + [[ Fedora Linux == FreeBSD ]] 00:02:09.250 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.250 + sudo dmesg -Tw 00:02:09.250 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.250 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.250 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.250 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.250 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.250 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.250 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:09.250 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.250 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.250 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.250 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.250 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.250 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.250 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.508 13:43:02 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:09.508 13:43:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.508 13:43:02 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:09.508 13:43:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:09.508 13:43:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.508 13:43:02 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:09.508 13:43:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.508 13:43:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:09.508 13:43:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.508 13:43:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.508 13:43:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.508 13:43:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.508 13:43:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.508 13:43:02 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.508 13:43:02 -- paths/export.sh@5 -- $ export PATH 00:02:09.508 13:43:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.508 13:43:02 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.508 13:43:02 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:09.508 13:43:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733924582.XXXXXX 00:02:09.508 13:43:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733924582.5WU62r 00:02:09.508 13:43:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:09.508 13:43:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:09.508 13:43:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:09.508 13:43:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.508 13:43:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.508 13:43:02 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:09.508 13:43:02 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:09.508 13:43:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.508 13:43:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:09.508 13:43:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:09.508 13:43:02 -- pm/common@17 -- $ local monitor 00:02:09.508 13:43:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.508 13:43:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.508 13:43:02 -- pm/common@25 -- $ sleep 1 00:02:09.508 13:43:02 -- pm/common@21 -- $ date +%s 00:02:09.508 13:43:02 -- pm/common@21 -- $ date +%s 00:02:09.508 13:43:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924582 00:02:09.508 13:43:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924582 00:02:09.508 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924582_collect-vmstat.pm.log 00:02:09.508 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924582_collect-cpu-load.pm.log 00:02:10.442 13:43:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:10.442 13:43:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.442 13:43:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.442 13:43:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.442 13:43:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.442 Wed Dec 11 01:43:03 PM UTC 2024 00:02:10.442 13:43:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.442 v25.01-rc1-1-g4dfeb7f95 00:02:10.442 13:43:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.442 13:43:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.442 13:43:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.442 13:43:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:10.442 13:43:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:10.442 13:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.442 ************************************ 00:02:10.442 START TEST ubsan 00:02:10.442 ************************************ 00:02:10.442 using ubsan 00:02:10.442 13:43:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:10.442 00:02:10.442 real 0m0.000s 00:02:10.442 user 0m0.000s 00:02:10.442 sys 0m0.000s 00:02:10.442 13:43:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:10.442 13:43:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.442 ************************************ 00:02:10.442 END TEST ubsan 00:02:10.442 ************************************ 00:02:10.442 13:43:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.442 13:43:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.442 13:43:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.442 13:43:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:10.701 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:10.701 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:10.958 Using 'verbs' RDMA provider 00:02:24.555 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:39.452 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:39.452 Creating mk/config.mk...done. 00:02:39.452 Creating mk/cc.flags.mk...done. 00:02:39.452 Type 'make' to build. 
00:02:39.452 13:43:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:39.452 13:43:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:39.452 13:43:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:39.452 13:43:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.452 ************************************ 00:02:39.452 START TEST make 00:02:39.452 ************************************ 00:02:39.452 13:43:30 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:51.670 The Meson build system 00:02:51.670 Version: 1.5.0 00:02:51.670 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:51.670 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:51.670 Build type: native build 00:02:51.670 Program cat found: YES (/usr/bin/cat) 00:02:51.670 Project name: DPDK 00:02:51.670 Project version: 24.03.0 00:02:51.670 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:51.670 C linker for the host machine: cc ld.bfd 2.40-14 00:02:51.670 Host machine cpu family: x86_64 00:02:51.670 Host machine cpu: x86_64 00:02:51.670 Message: ## Building in Developer Mode ## 00:02:51.670 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:51.670 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:51.670 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:51.670 Program python3 found: YES (/usr/bin/python3) 00:02:51.670 Program cat found: YES (/usr/bin/cat) 00:02:51.670 Compiler for C supports arguments -march=native: YES 00:02:51.670 Checking for size of "void *" : 8 00:02:51.670 Checking for size of "void *" : 8 (cached) 00:02:51.670 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:51.670 Library m found: YES 00:02:51.670 Library numa found: YES 00:02:51.670 Has header "numaif.h" : YES 00:02:51.670 Library fdt found: NO 00:02:51.670 Library execinfo found: NO 00:02:51.670 Has header "execinfo.h" : YES 00:02:51.670 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:51.670 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:51.670 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:51.670 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:51.670 Run-time dependency openssl found: YES 3.1.1 00:02:51.670 Run-time dependency libpcap found: YES 1.10.4 00:02:51.670 Has header "pcap.h" with dependency libpcap: YES 00:02:51.670 Compiler for C supports arguments -Wcast-qual: YES 00:02:51.670 Compiler for C supports arguments -Wdeprecated: YES 00:02:51.670 Compiler for C supports arguments -Wformat: YES 00:02:51.670 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:51.670 Compiler for C supports arguments -Wformat-security: NO 00:02:51.670 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.670 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:51.670 Compiler for C supports arguments -Wnested-externs: YES 00:02:51.670 Compiler for C supports arguments -Wold-style-definition: YES 00:02:51.670 Compiler for C supports arguments -Wpointer-arith: YES 00:02:51.670 Compiler for C supports arguments -Wsign-compare: YES 00:02:51.670 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:51.670 Compiler for C supports arguments -Wundef: YES 00:02:51.670 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.670 Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:51.670 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:51.670 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.670 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:51.670 Program objdump found: YES (/usr/bin/objdump) 00:02:51.670 Compiler for C supports arguments -mavx512f: YES 00:02:51.670 Checking if "AVX512 checking" compiles: YES 00:02:51.670 Fetching value of define "__SSE4_2__" : 1 00:02:51.671 Fetching value of define "__AES__" : 1 00:02:51.671 Fetching value of define "__AVX__" : 1 00:02:51.671 Fetching value of define "__AVX2__" : 1 00:02:51.671 Fetching value of define "__AVX512BW__" : (undefined) 00:02:51.671 Fetching value of define "__AVX512CD__" : (undefined) 00:02:51.671 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:51.671 Fetching value of define "__AVX512F__" : (undefined) 00:02:51.671 Fetching value of define "__AVX512VL__" : (undefined) 00:02:51.671 Fetching value of define "__PCLMUL__" : 1 00:02:51.671 Fetching value of define "__RDRND__" : 1 00:02:51.671 Fetching value of define "__RDSEED__" : 1 00:02:51.671 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:51.671 Fetching value of define "__znver1__" : (undefined) 00:02:51.671 Fetching value of define "__znver2__" : (undefined) 00:02:51.671 Fetching value of define "__znver3__" : (undefined) 00:02:51.671 Fetching value of define "__znver4__" : (undefined) 00:02:51.671 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:51.671 Message: lib/log: Defining dependency "log" 00:02:51.671 Message: lib/kvargs: Defining dependency "kvargs" 00:02:51.671 Message: lib/telemetry: Defining dependency "telemetry" 00:02:51.671 Checking for function "getentropy" : NO 00:02:51.671 Message: lib/eal: Defining dependency "eal" 00:02:51.671 Message: lib/ring: Defining dependency "ring" 00:02:51.671 Message: lib/rcu: Defining dependency "rcu" 00:02:51.671 Message: lib/mempool: Defining dependency "mempool" 00:02:51.671 Message: lib/mbuf: Defining dependency "mbuf" 00:02:51.671 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:51.671 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:51.671 Compiler for C supports arguments -mpclmul: YES 00:02:51.671 Compiler for C supports arguments -maes: YES 00:02:51.671 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:51.671 Compiler for C supports arguments -mavx512bw: YES 00:02:51.671 Compiler for C supports arguments -mavx512dq: YES 00:02:51.671 Compiler for C supports arguments -mavx512vl: YES 00:02:51.671 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:51.671 Compiler for C supports arguments -mavx2: YES 00:02:51.671 Compiler for C supports arguments -mavx: YES 00:02:51.671 Message: lib/net: Defining dependency "net" 00:02:51.671 Message: lib/meter: Defining dependency "meter" 00:02:51.671 Message: lib/ethdev: Defining dependency "ethdev" 00:02:51.671 Message: lib/pci: Defining dependency "pci" 00:02:51.671 Message: lib/cmdline: Defining dependency "cmdline" 00:02:51.671 Message: lib/hash: Defining dependency "hash" 00:02:51.671 Message: lib/timer: Defining dependency "timer" 00:02:51.671 Message: lib/compressdev: Defining dependency "compressdev" 00:02:51.671 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:51.671 Message: lib/dmadev: Defining dependency "dmadev" 00:02:51.671 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:51.671 Message: lib/power: Defining dependency "power" 00:02:51.671 Message: 
lib/reorder: Defining dependency "reorder" 00:02:51.671 Message: lib/security: Defining dependency "security" 00:02:51.671 Has header "linux/userfaultfd.h" : YES 00:02:51.671 Has header "linux/vduse.h" : YES 00:02:51.671 Message: lib/vhost: Defining dependency "vhost" 00:02:51.671 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:51.671 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:51.671 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:51.671 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:51.671 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:51.671 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:51.671 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:51.671 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:51.671 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:51.671 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:51.671 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:51.671 Configuring doxy-api-html.conf using configuration 00:02:51.671 Configuring doxy-api-man.conf using configuration 00:02:51.671 Program mandb found: YES (/usr/bin/mandb) 00:02:51.671 Program sphinx-build found: NO 00:02:51.671 Configuring rte_build_config.h using configuration 00:02:51.671 Message: 00:02:51.671 ================= 00:02:51.671 Applications Enabled 00:02:51.671 ================= 00:02:51.671 00:02:51.671 apps: 00:02:51.671 00:02:51.671 00:02:51.671 Message: 00:02:51.671 ================= 00:02:51.671 Libraries Enabled 00:02:51.671 ================= 00:02:51.671 00:02:51.671 libs: 00:02:51.671 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:51.671 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:51.671 cryptodev, dmadev, power, reorder, security, vhost, 00:02:51.671 00:02:51.671 Message: 00:02:51.671 =============== 00:02:51.671 Drivers Enabled 00:02:51.671 =============== 00:02:51.671 00:02:51.671 common: 00:02:51.671 00:02:51.671 bus: 00:02:51.671 pci, vdev, 00:02:51.671 mempool: 00:02:51.671 ring, 00:02:51.671 dma: 00:02:51.671 00:02:51.671 net: 00:02:51.671 00:02:51.671 crypto: 00:02:51.671 00:02:51.671 compress: 00:02:51.671 00:02:51.671 vdpa: 00:02:51.671 00:02:51.671 00:02:51.671 Message: 00:02:51.671 ================= 00:02:51.671 Content Skipped 00:02:51.671 ================= 00:02:51.671 00:02:51.671 apps: 00:02:51.671 dumpcap: explicitly disabled via build config 00:02:51.671 graph: explicitly disabled via build config 00:02:51.671 pdump: explicitly disabled via build config 00:02:51.671 proc-info: explicitly disabled via build config 00:02:51.671 test-acl: explicitly disabled via build config 00:02:51.671 test-bbdev: explicitly disabled via build config 00:02:51.671 test-cmdline: explicitly disabled via build config 00:02:51.671 test-compress-perf: explicitly disabled via build config 00:02:51.671 test-crypto-perf: explicitly disabled via build config 00:02:51.671 test-dma-perf: explicitly disabled via build config 00:02:51.671 test-eventdev: explicitly disabled via build config 00:02:51.671 test-fib: explicitly disabled via build config 00:02:51.671 test-flow-perf: explicitly disabled via build config 00:02:51.671 test-gpudev: explicitly disabled via build config 00:02:51.671 test-mldev: explicitly disabled via build config 00:02:51.671 test-pipeline: explicitly disabled via build config 
00:02:51.671 test-pmd: explicitly disabled via build config 00:02:51.671 test-regex: explicitly disabled via build config 00:02:51.671 test-sad: explicitly disabled via build config 00:02:51.671 test-security-perf: explicitly disabled via build config 00:02:51.671 00:02:51.671 libs: 00:02:51.671 argparse: explicitly disabled via build config 00:02:51.671 metrics: explicitly disabled via build config 00:02:51.671 acl: explicitly disabled via build config 00:02:51.671 bbdev: explicitly disabled via build config 00:02:51.671 bitratestats: explicitly disabled via build config 00:02:51.671 bpf: explicitly disabled via build config 00:02:51.671 cfgfile: explicitly disabled via build config 00:02:51.671 distributor: explicitly disabled via build config 00:02:51.671 efd: explicitly disabled via build config 00:02:51.671 eventdev: explicitly disabled via build config 00:02:51.671 dispatcher: explicitly disabled via build config 00:02:51.671 gpudev: explicitly disabled via build config 00:02:51.671 gro: explicitly disabled via build config 00:02:51.671 gso: explicitly disabled via build config 00:02:51.671 ip_frag: explicitly disabled via build config 00:02:51.671 jobstats: explicitly disabled via build config 00:02:51.671 latencystats: explicitly disabled via build config 00:02:51.671 lpm: explicitly disabled via build config 00:02:51.671 member: explicitly disabled via build config 00:02:51.671 pcapng: explicitly disabled via build config 00:02:51.671 rawdev: explicitly disabled via build config 00:02:51.671 regexdev: explicitly disabled via build config 00:02:51.671 mldev: explicitly disabled via build config 00:02:51.671 rib: explicitly disabled via build config 00:02:51.671 sched: explicitly disabled via build config 00:02:51.671 stack: explicitly disabled via build config 00:02:51.671 ipsec: explicitly disabled via build config 00:02:51.671 pdcp: explicitly disabled via build config 00:02:51.671 fib: explicitly disabled via build config 00:02:51.671 port: explicitly disabled via build config 00:02:51.671 pdump: explicitly disabled via build config 00:02:51.671 table: explicitly disabled via build config 00:02:51.671 pipeline: explicitly disabled via build config 00:02:51.671 graph: explicitly disabled via build config 00:02:51.671 node: explicitly disabled via build config 00:02:51.671 00:02:51.671 drivers: 00:02:51.671 common/cpt: not in enabled drivers build config 00:02:51.671 common/dpaax: not in enabled drivers build config 00:02:51.671 common/iavf: not in enabled drivers build config 00:02:51.671 common/idpf: not in enabled drivers build config 00:02:51.671 common/ionic: not in enabled drivers build config 00:02:51.671 common/mvep: not in enabled drivers build config 00:02:51.671 common/octeontx: not in enabled drivers build config 00:02:51.671 bus/auxiliary: not in enabled drivers build config 00:02:51.671 bus/cdx: not in enabled drivers build config 00:02:51.671 bus/dpaa: not in enabled drivers build config 00:02:51.671 bus/fslmc: not in enabled drivers build config 00:02:51.671 bus/ifpga: not in enabled drivers build config 00:02:51.671 bus/platform: not in enabled drivers build config 00:02:51.671 bus/uacce: not in enabled drivers build config 00:02:51.671 bus/vmbus: not in enabled drivers build config 00:02:51.671 common/cnxk: not in enabled drivers build config 00:02:51.671 common/mlx5: not in enabled drivers build config 00:02:51.671 common/nfp: not in enabled drivers build config 00:02:51.671 common/nitrox: not in enabled drivers build config 00:02:51.671 common/qat: not in 
enabled drivers build config 00:02:51.671 common/sfc_efx: not in enabled drivers build config 00:02:51.671 mempool/bucket: not in enabled drivers build config 00:02:51.671 mempool/cnxk: not in enabled drivers build config 00:02:51.671 mempool/dpaa: not in enabled drivers build config 00:02:51.671 mempool/dpaa2: not in enabled drivers build config 00:02:51.671 mempool/octeontx: not in enabled drivers build config 00:02:51.671 mempool/stack: not in enabled drivers build config 00:02:51.671 dma/cnxk: not in enabled drivers build config 00:02:51.671 dma/dpaa: not in enabled drivers build config 00:02:51.671 dma/dpaa2: not in enabled drivers build config 00:02:51.671 dma/hisilicon: not in enabled drivers build config 00:02:51.671 dma/idxd: not in enabled drivers build config 00:02:51.671 dma/ioat: not in enabled drivers build config 00:02:51.671 dma/skeleton: not in enabled drivers build config 00:02:51.671 net/af_packet: not in enabled drivers build config 00:02:51.672 net/af_xdp: not in enabled drivers build config 00:02:51.672 net/ark: not in enabled drivers build config 00:02:51.672 net/atlantic: not in enabled drivers build config 00:02:51.672 net/avp: not in enabled drivers build config 00:02:51.672 net/axgbe: not in enabled drivers build config 00:02:51.672 net/bnx2x: not in enabled drivers build config 00:02:51.672 net/bnxt: not in enabled drivers build config 00:02:51.672 net/bonding: not in enabled drivers build config 00:02:51.672 net/cnxk: not in enabled drivers build config 00:02:51.672 net/cpfl: not in enabled drivers build config 00:02:51.672 net/cxgbe: not in enabled drivers build config 00:02:51.672 net/dpaa: not in enabled drivers build config 00:02:51.672 net/dpaa2: not in enabled drivers build config 00:02:51.672 net/e1000: not in enabled drivers build config 00:02:51.672 net/ena: not in enabled drivers build config 00:02:51.672 net/enetc: not in enabled drivers build config 00:02:51.672 net/enetfec: not in enabled drivers build config 00:02:51.672 net/enic: not in enabled drivers build config 00:02:51.672 net/failsafe: not in enabled drivers build config 00:02:51.672 net/fm10k: not in enabled drivers build config 00:02:51.672 net/gve: not in enabled drivers build config 00:02:51.672 net/hinic: not in enabled drivers build config 00:02:51.672 net/hns3: not in enabled drivers build config 00:02:51.672 net/i40e: not in enabled drivers build config 00:02:51.672 net/iavf: not in enabled drivers build config 00:02:51.672 net/ice: not in enabled drivers build config 00:02:51.672 net/idpf: not in enabled drivers build config 00:02:51.672 net/igc: not in enabled drivers build config 00:02:51.672 net/ionic: not in enabled drivers build config 00:02:51.672 net/ipn3ke: not in enabled drivers build config 00:02:51.672 net/ixgbe: not in enabled drivers build config 00:02:51.672 net/mana: not in enabled drivers build config 00:02:51.672 net/memif: not in enabled drivers build config 00:02:51.672 net/mlx4: not in enabled drivers build config 00:02:51.672 net/mlx5: not in enabled drivers build config 00:02:51.672 net/mvneta: not in enabled drivers build config 00:02:51.672 net/mvpp2: not in enabled drivers build config 00:02:51.672 net/netvsc: not in enabled drivers build config 00:02:51.672 net/nfb: not in enabled drivers build config 00:02:51.672 net/nfp: not in enabled drivers build config 00:02:51.672 net/ngbe: not in enabled drivers build config 00:02:51.672 net/null: not in enabled drivers build config 00:02:51.672 net/octeontx: not in enabled drivers build config 00:02:51.672 
net/octeon_ep: not in enabled drivers build config 00:02:51.672 net/pcap: not in enabled drivers build config 00:02:51.672 net/pfe: not in enabled drivers build config 00:02:51.672 net/qede: not in enabled drivers build config 00:02:51.672 net/ring: not in enabled drivers build config 00:02:51.672 net/sfc: not in enabled drivers build config 00:02:51.672 net/softnic: not in enabled drivers build config 00:02:51.672 net/tap: not in enabled drivers build config 00:02:51.672 net/thunderx: not in enabled drivers build config 00:02:51.672 net/txgbe: not in enabled drivers build config 00:02:51.672 net/vdev_netvsc: not in enabled drivers build config 00:02:51.672 net/vhost: not in enabled drivers build config 00:02:51.672 net/virtio: not in enabled drivers build config 00:02:51.672 net/vmxnet3: not in enabled drivers build config 00:02:51.672 raw/*: missing internal dependency, "rawdev" 00:02:51.672 crypto/armv8: not in enabled drivers build config 00:02:51.672 crypto/bcmfs: not in enabled drivers build config 00:02:51.672 crypto/caam_jr: not in enabled drivers build config 00:02:51.672 crypto/ccp: not in enabled drivers build config 00:02:51.672 crypto/cnxk: not in enabled drivers build config 00:02:51.672 crypto/dpaa_sec: not in enabled drivers build config 00:02:51.672 crypto/dpaa2_sec: not in enabled drivers build config 00:02:51.672 crypto/ipsec_mb: not in enabled drivers build config 00:02:51.672 crypto/mlx5: not in enabled drivers build config 00:02:51.672 crypto/mvsam: not in enabled drivers build config 00:02:51.672 crypto/nitrox: not in enabled drivers build config 00:02:51.672 crypto/null: not in enabled drivers build config 00:02:51.672 crypto/octeontx: not in enabled drivers build config 00:02:51.672 crypto/openssl: not in enabled drivers build config 00:02:51.672 crypto/scheduler: not in enabled drivers build config 00:02:51.672 crypto/uadk: not in enabled drivers build config 00:02:51.672 crypto/virtio: not in enabled drivers build config 00:02:51.672 compress/isal: not in enabled drivers build config 00:02:51.672 compress/mlx5: not in enabled drivers build config 00:02:51.672 compress/nitrox: not in enabled drivers build config 00:02:51.672 compress/octeontx: not in enabled drivers build config 00:02:51.672 compress/zlib: not in enabled drivers build config 00:02:51.672 regex/*: missing internal dependency, "regexdev" 00:02:51.672 ml/*: missing internal dependency, "mldev" 00:02:51.672 vdpa/ifc: not in enabled drivers build config 00:02:51.672 vdpa/mlx5: not in enabled drivers build config 00:02:51.672 vdpa/nfp: not in enabled drivers build config 00:02:51.672 vdpa/sfc: not in enabled drivers build config 00:02:51.672 event/*: missing internal dependency, "eventdev" 00:02:51.672 baseband/*: missing internal dependency, "bbdev" 00:02:51.672 gpu/*: missing internal dependency, "gpudev" 00:02:51.672 00:02:51.672 00:02:51.672 Build targets in project: 85 00:02:51.672 00:02:51.672 DPDK 24.03.0 00:02:51.672 00:02:51.672 User defined options 00:02:51.672 buildtype : debug 00:02:51.672 default_library : shared 00:02:51.672 libdir : lib 00:02:51.672 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:51.672 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:51.672 c_link_args : 00:02:51.672 cpu_instruction_set: native 00:02:51.672 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:51.672 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:51.672 enable_docs : false 00:02:51.672 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:51.672 enable_kmods : false 00:02:51.672 max_lcores : 128 00:02:51.672 tests : false 00:02:51.672 00:02:51.672 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:51.672 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:51.672 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:51.672 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:51.672 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:51.672 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:51.672 [5/268] Linking static target lib/librte_kvargs.a 00:02:51.672 [6/268] Linking static target lib/librte_log.a 00:02:51.930 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.930 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:52.191 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.191 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.191 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.191 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:52.448 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:52.448 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.448 [15/268] Linking static target lib/librte_telemetry.a 00:02:52.448 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.448 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:52.448 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.448 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.448 [20/268] Linking target lib/librte_log.so.24.1 00:02:53.014 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.015 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:53.015 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.272 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.272 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.272 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.272 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:53.272 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.272 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.272 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.273 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.273 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.273 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.273 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:53.530 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.530 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:53.788 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.046 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.046 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.046 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.046 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.304 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.304 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.304 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.304 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.304 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.304 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.304 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.304 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.563 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.822 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:55.103 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:55.103 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.103 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.361 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.361 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.361 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.361 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.361 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.361 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.361 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.928 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.928 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.928 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.928 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.187 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.446 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.446 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.446 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.446 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.446 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.446 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.446 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.704 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.704 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.704 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.704 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.704 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:56.963 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.221 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.221 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.221 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.221 [83/268] Linking static target lib/librte_ring.a 00:02:57.221 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.221 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.478 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.478 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.478 [88/268] Linking static target lib/librte_rcu.a 00:02:57.479 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.479 [90/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.739 [91/268] Linking static target lib/librte_eal.a 00:02:57.739 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.739 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.739 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.739 [95/268] Linking static target lib/librte_mempool.a 00:02:57.997 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.997 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.997 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.997 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.997 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.997 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.255 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.255 [103/268] Linking static target lib/librte_mbuf.a 00:02:58.513 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.513 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.771 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.771 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.771 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.771 [109/268] Linking static target lib/librte_net.a 00:02:59.030 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.030 [111/268] Linking static target lib/librte_meter.a 00:02:59.030 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:59.289 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:59.289 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.289 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:59.289 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.289 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.289 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.547 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:59.805 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.064 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.322 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.322 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.322 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.322 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.322 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:00.322 [127/268] Linking static target lib/librte_pci.a 00:03:00.581 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.581 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.581 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.581 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.581 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.840 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.840 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.840 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:00.840 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.840 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.098 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.099 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.099 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.099 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.099 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.099 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.099 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.099 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:01.099 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:01.357 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.357 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:01.357 [149/268] Linking static target lib/librte_cmdline.a 00:03:01.357 [150/268] Linking static target lib/librte_ethdev.a 00:03:01.614 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.614 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:01.873 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:01.873 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:01.873 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:01.873 [156/268] Linking static target lib/librte_timer.a 00:03:01.873 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.132 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:02.132 [159/268] Linking static target lib/librte_hash.a 00:03:02.390 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.390 [161/268] Linking static target lib/librte_compressdev.a 00:03:02.390 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.650 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.650 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.650 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.650 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.908 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.908 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:02.908 [169/268] Linking static target lib/librte_dmadev.a 00:03:03.167 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.167 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.167 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.167 [173/268] Linking static target lib/librte_cryptodev.a 00:03:03.167 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.167 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.425 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.425 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.425 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.993 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.993 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:03.993 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.993 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.993 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.993 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.252 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.252 [186/268] Linking static target lib/librte_power.a 00:03:04.252 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:04.252 [188/268] Linking static target lib/librte_reorder.a 00:03:04.510 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.769 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.769 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.769 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.038 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.038 [194/268] Linking static target lib/librte_security.a 00:03:05.038 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.296 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.555 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:05.555 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.555 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.555 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.813 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.813 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:06.071 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.330 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.330 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.330 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.330 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.588 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.588 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.588 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.588 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.588 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.846 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.846 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.846 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.846 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.846 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:06.846 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.846 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.847 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:06.847 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.847 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:07.105 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.105 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.105 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.105 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:07.105 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.363 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:07.621 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.879 [230/268] Linking static target lib/librte_vhost.a 00:03:09.254 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.254 [232/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.254 [233/268] Linking target lib/librte_eal.so.24.1 00:03:09.254 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:09.254 [235/268] Linking target lib/librte_pci.so.24.1 00:03:09.254 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:09.254 [237/268] Linking target lib/librte_ring.so.24.1 00:03:09.254 [238/268] Linking target lib/librte_timer.so.24.1 00:03:09.254 [239/268] Linking target lib/librte_meter.so.24.1 00:03:09.254 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:09.254 [241/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.513 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:09.513 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:09.513 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:09.513 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:09.513 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:09.513 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:09.513 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:09.513 [249/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:09.771 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:09.771 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:09.771 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:09.771 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:09.771 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.038 [255/268] Linking target lib/librte_net.so.24.1 00:03:10.038 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:10.038 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:10.038 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:10.038 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.038 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.038 [261/268] Linking target lib/librte_hash.so.24.1 00:03:10.038 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:10.038 [263/268] Linking target lib/librte_security.so.24.1 00:03:10.038 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:10.312 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:10.313 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:10.313 [267/268] Linking target lib/librte_power.so.24.1 00:03:10.313 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:10.313 INFO: autodetecting backend as ninja 00:03:10.313 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:36.852 CC lib/ut/ut.o 00:03:36.852 CC lib/ut_mock/mock.o 00:03:36.852 CC lib/log/log.o 00:03:36.852 CC 
lib/log/log_deprecated.o 00:03:36.852 CC lib/log/log_flags.o 00:03:36.852 LIB libspdk_ut.a 00:03:36.852 LIB libspdk_ut_mock.a 00:03:36.852 SO libspdk_ut.so.2.0 00:03:36.852 LIB libspdk_log.a 00:03:36.852 SO libspdk_ut_mock.so.6.0 00:03:36.852 SO libspdk_log.so.7.1 00:03:36.852 SYMLINK libspdk_ut.so 00:03:36.852 SYMLINK libspdk_ut_mock.so 00:03:36.852 SYMLINK libspdk_log.so 00:03:36.852 CC lib/ioat/ioat.o 00:03:36.852 CXX lib/trace_parser/trace.o 00:03:36.852 CC lib/dma/dma.o 00:03:36.852 CC lib/util/base64.o 00:03:36.852 CC lib/util/bit_array.o 00:03:36.852 CC lib/util/crc16.o 00:03:36.852 CC lib/util/cpuset.o 00:03:36.852 CC lib/util/crc32.o 00:03:36.852 CC lib/util/crc32c.o 00:03:36.852 CC lib/vfio_user/host/vfio_user_pci.o 00:03:36.852 CC lib/vfio_user/host/vfio_user.o 00:03:36.852 CC lib/util/crc32_ieee.o 00:03:36.852 CC lib/util/crc64.o 00:03:36.852 CC lib/util/dif.o 00:03:36.852 CC lib/util/fd.o 00:03:36.852 LIB libspdk_dma.a 00:03:37.111 SO libspdk_dma.so.5.0 00:03:37.111 CC lib/util/fd_group.o 00:03:37.111 CC lib/util/file.o 00:03:37.111 CC lib/util/hexlify.o 00:03:37.111 LIB libspdk_ioat.a 00:03:37.111 SYMLINK libspdk_dma.so 00:03:37.111 CC lib/util/iov.o 00:03:37.111 SO libspdk_ioat.so.7.0 00:03:37.111 CC lib/util/math.o 00:03:37.111 CC lib/util/net.o 00:03:37.111 LIB libspdk_vfio_user.a 00:03:37.111 SYMLINK libspdk_ioat.so 00:03:37.111 CC lib/util/pipe.o 00:03:37.111 SO libspdk_vfio_user.so.5.0 00:03:37.111 CC lib/util/strerror_tls.o 00:03:37.111 CC lib/util/string.o 00:03:37.111 SYMLINK libspdk_vfio_user.so 00:03:37.111 CC lib/util/uuid.o 00:03:37.111 CC lib/util/xor.o 00:03:37.369 CC lib/util/zipf.o 00:03:37.369 CC lib/util/md5.o 00:03:37.627 LIB libspdk_util.a 00:03:37.627 SO libspdk_util.so.10.1 00:03:37.627 LIB libspdk_trace_parser.a 00:03:37.885 SO libspdk_trace_parser.so.6.0 00:03:37.885 SYMLINK libspdk_util.so 00:03:37.885 SYMLINK libspdk_trace_parser.so 00:03:38.143 CC lib/idxd/idxd.o 00:03:38.143 CC lib/conf/conf.o 00:03:38.143 CC lib/idxd/idxd_user.o 00:03:38.143 CC lib/json/json_parse.o 00:03:38.143 CC lib/idxd/idxd_kernel.o 00:03:38.143 CC lib/json/json_util.o 00:03:38.143 CC lib/json/json_write.o 00:03:38.143 CC lib/rdma_utils/rdma_utils.o 00:03:38.143 CC lib/env_dpdk/env.o 00:03:38.143 CC lib/vmd/vmd.o 00:03:38.143 CC lib/vmd/led.o 00:03:38.401 LIB libspdk_conf.a 00:03:38.401 CC lib/env_dpdk/memory.o 00:03:38.401 CC lib/env_dpdk/pci.o 00:03:38.401 SO libspdk_conf.so.6.0 00:03:38.401 CC lib/env_dpdk/init.o 00:03:38.401 LIB libspdk_json.a 00:03:38.401 SYMLINK libspdk_conf.so 00:03:38.401 CC lib/env_dpdk/threads.o 00:03:38.401 SO libspdk_json.so.6.0 00:03:38.401 LIB libspdk_rdma_utils.a 00:03:38.401 CC lib/env_dpdk/pci_ioat.o 00:03:38.401 SO libspdk_rdma_utils.so.1.0 00:03:38.659 SYMLINK libspdk_json.so 00:03:38.659 CC lib/env_dpdk/pci_virtio.o 00:03:38.659 SYMLINK libspdk_rdma_utils.so 00:03:38.659 CC lib/env_dpdk/pci_vmd.o 00:03:38.659 CC lib/env_dpdk/pci_idxd.o 00:03:38.659 LIB libspdk_idxd.a 00:03:38.659 CC lib/env_dpdk/pci_event.o 00:03:38.659 SO libspdk_idxd.so.12.1 00:03:38.659 CC lib/env_dpdk/sigbus_handler.o 00:03:38.659 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.659 CC lib/env_dpdk/pci_dpdk.o 00:03:38.659 LIB libspdk_vmd.a 00:03:38.659 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:38.917 SO libspdk_vmd.so.6.0 00:03:38.917 SYMLINK libspdk_idxd.so 00:03:38.917 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:38.917 SYMLINK libspdk_vmd.so 00:03:38.917 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.917 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.917 CC lib/jsonrpc/jsonrpc_client_tcp.o 
00:03:38.917 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.917 CC lib/rdma_provider/common.o 00:03:39.175 LIB libspdk_jsonrpc.a 00:03:39.175 SO libspdk_jsonrpc.so.6.0 00:03:39.432 LIB libspdk_rdma_provider.a 00:03:39.432 SYMLINK libspdk_jsonrpc.so 00:03:39.432 SO libspdk_rdma_provider.so.7.0 00:03:39.432 SYMLINK libspdk_rdma_provider.so 00:03:39.690 CC lib/rpc/rpc.o 00:03:39.690 LIB libspdk_env_dpdk.a 00:03:39.690 SO libspdk_env_dpdk.so.15.1 00:03:39.948 LIB libspdk_rpc.a 00:03:39.948 SO libspdk_rpc.so.6.0 00:03:39.948 SYMLINK libspdk_rpc.so 00:03:39.948 SYMLINK libspdk_env_dpdk.so 00:03:40.206 CC lib/trace/trace_flags.o 00:03:40.206 CC lib/trace/trace.o 00:03:40.206 CC lib/trace/trace_rpc.o 00:03:40.206 CC lib/keyring/keyring.o 00:03:40.206 CC lib/notify/notify.o 00:03:40.206 CC lib/keyring/keyring_rpc.o 00:03:40.206 CC lib/notify/notify_rpc.o 00:03:40.463 LIB libspdk_notify.a 00:03:40.463 SO libspdk_notify.so.6.0 00:03:40.463 LIB libspdk_trace.a 00:03:40.463 LIB libspdk_keyring.a 00:03:40.463 SYMLINK libspdk_notify.so 00:03:40.463 SO libspdk_trace.so.11.0 00:03:40.463 SO libspdk_keyring.so.2.0 00:03:40.463 SYMLINK libspdk_trace.so 00:03:40.464 SYMLINK libspdk_keyring.so 00:03:40.722 CC lib/thread/thread.o 00:03:40.722 CC lib/thread/iobuf.o 00:03:40.722 CC lib/sock/sock.o 00:03:40.722 CC lib/sock/sock_rpc.o 00:03:41.286 LIB libspdk_sock.a 00:03:41.286 SO libspdk_sock.so.10.0 00:03:41.286 SYMLINK libspdk_sock.so 00:03:41.545 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.545 CC lib/nvme/nvme_ctrlr.o 00:03:41.545 CC lib/nvme/nvme_fabric.o 00:03:41.545 CC lib/nvme/nvme_ns_cmd.o 00:03:41.545 CC lib/nvme/nvme_ns.o 00:03:41.545 CC lib/nvme/nvme_pcie_common.o 00:03:41.545 CC lib/nvme/nvme_pcie.o 00:03:41.545 CC lib/nvme/nvme_qpair.o 00:03:41.545 CC lib/nvme/nvme.o 00:03:42.476 CC lib/nvme/nvme_quirks.o 00:03:42.477 CC lib/nvme/nvme_transport.o 00:03:42.477 LIB libspdk_thread.a 00:03:42.477 CC lib/nvme/nvme_discovery.o 00:03:42.477 SO libspdk_thread.so.11.0 00:03:42.733 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.733 SYMLINK libspdk_thread.so 00:03:42.733 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.733 CC lib/nvme/nvme_tcp.o 00:03:42.733 CC lib/nvme/nvme_opal.o 00:03:42.733 CC lib/nvme/nvme_io_msg.o 00:03:42.992 CC lib/nvme/nvme_poll_group.o 00:03:43.250 CC lib/nvme/nvme_zns.o 00:03:43.250 CC lib/nvme/nvme_stubs.o 00:03:43.250 CC lib/nvme/nvme_auth.o 00:03:43.250 CC lib/nvme/nvme_cuse.o 00:03:43.250 CC lib/nvme/nvme_rdma.o 00:03:43.508 CC lib/accel/accel.o 00:03:43.765 CC lib/blob/blobstore.o 00:03:43.765 CC lib/accel/accel_rpc.o 00:03:43.765 CC lib/blob/request.o 00:03:43.765 CC lib/blob/zeroes.o 00:03:44.023 CC lib/accel/accel_sw.o 00:03:44.023 CC lib/blob/blob_bs_dev.o 00:03:44.281 CC lib/init/json_config.o 00:03:44.281 CC lib/init/subsystem.o 00:03:44.281 CC lib/init/subsystem_rpc.o 00:03:44.281 CC lib/init/rpc.o 00:03:44.539 CC lib/virtio/virtio.o 00:03:44.540 CC lib/virtio/virtio_vhost_user.o 00:03:44.540 CC lib/fsdev/fsdev.o 00:03:44.540 CC lib/fsdev/fsdev_io.o 00:03:44.540 CC lib/virtio/virtio_vfio_user.o 00:03:44.540 CC lib/virtio/virtio_pci.o 00:03:44.540 LIB libspdk_init.a 00:03:44.540 SO libspdk_init.so.6.0 00:03:44.797 SYMLINK libspdk_init.so 00:03:44.797 CC lib/fsdev/fsdev_rpc.o 00:03:44.797 LIB libspdk_accel.a 00:03:44.797 SO libspdk_accel.so.16.0 00:03:44.797 LIB libspdk_virtio.a 00:03:44.797 SYMLINK libspdk_accel.so 00:03:44.797 SO libspdk_virtio.so.7.0 00:03:44.797 LIB libspdk_nvme.a 00:03:45.055 CC lib/event/app.o 00:03:45.055 CC lib/event/reactor.o 00:03:45.055 CC 
lib/event/log_rpc.o 00:03:45.055 CC lib/event/scheduler_static.o 00:03:45.055 CC lib/event/app_rpc.o 00:03:45.055 SYMLINK libspdk_virtio.so 00:03:45.055 CC lib/bdev/bdev.o 00:03:45.055 CC lib/bdev/bdev_rpc.o 00:03:45.055 SO libspdk_nvme.so.15.0 00:03:45.055 LIB libspdk_fsdev.a 00:03:45.055 CC lib/bdev/bdev_zone.o 00:03:45.055 CC lib/bdev/part.o 00:03:45.313 SO libspdk_fsdev.so.2.0 00:03:45.313 CC lib/bdev/scsi_nvme.o 00:03:45.313 SYMLINK libspdk_fsdev.so 00:03:45.313 LIB libspdk_event.a 00:03:45.313 SYMLINK libspdk_nvme.so 00:03:45.613 SO libspdk_event.so.14.0 00:03:45.614 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:45.614 SYMLINK libspdk_event.so 00:03:46.179 LIB libspdk_fuse_dispatcher.a 00:03:46.179 SO libspdk_fuse_dispatcher.so.1.0 00:03:46.437 SYMLINK libspdk_fuse_dispatcher.so 00:03:47.003 LIB libspdk_blob.a 00:03:47.003 SO libspdk_blob.so.12.0 00:03:47.261 SYMLINK libspdk_blob.so 00:03:47.519 CC lib/lvol/lvol.o 00:03:47.519 CC lib/blobfs/blobfs.o 00:03:47.519 CC lib/blobfs/tree.o 00:03:48.085 LIB libspdk_bdev.a 00:03:48.086 SO libspdk_bdev.so.17.0 00:03:48.344 SYMLINK libspdk_bdev.so 00:03:48.631 CC lib/nvmf/ctrlr_discovery.o 00:03:48.631 CC lib/nvmf/ctrlr_bdev.o 00:03:48.631 CC lib/nvmf/ctrlr.o 00:03:48.631 CC lib/nvmf/subsystem.o 00:03:48.631 CC lib/nbd/nbd.o 00:03:48.631 LIB libspdk_blobfs.a 00:03:48.631 CC lib/ftl/ftl_core.o 00:03:48.631 CC lib/scsi/dev.o 00:03:48.631 CC lib/ublk/ublk.o 00:03:48.631 SO libspdk_blobfs.so.11.0 00:03:48.631 LIB libspdk_lvol.a 00:03:48.631 SO libspdk_lvol.so.11.0 00:03:48.631 SYMLINK libspdk_blobfs.so 00:03:48.631 CC lib/ftl/ftl_init.o 00:03:48.631 SYMLINK libspdk_lvol.so 00:03:48.631 CC lib/nvmf/nvmf.o 00:03:48.631 CC lib/scsi/lun.o 00:03:48.890 CC lib/scsi/port.o 00:03:48.890 CC lib/ftl/ftl_layout.o 00:03:48.890 CC lib/nbd/nbd_rpc.o 00:03:49.149 CC lib/ftl/ftl_debug.o 00:03:49.149 CC lib/ublk/ublk_rpc.o 00:03:49.149 CC lib/scsi/scsi.o 00:03:49.149 LIB libspdk_nbd.a 00:03:49.149 SO libspdk_nbd.so.7.0 00:03:49.407 CC lib/nvmf/nvmf_rpc.o 00:03:49.407 CC lib/nvmf/transport.o 00:03:49.407 CC lib/nvmf/tcp.o 00:03:49.407 CC lib/scsi/scsi_bdev.o 00:03:49.407 LIB libspdk_ublk.a 00:03:49.407 SYMLINK libspdk_nbd.so 00:03:49.407 CC lib/scsi/scsi_pr.o 00:03:49.407 SO libspdk_ublk.so.3.0 00:03:49.407 CC lib/ftl/ftl_io.o 00:03:49.407 SYMLINK libspdk_ublk.so 00:03:49.407 CC lib/ftl/ftl_sb.o 00:03:49.665 CC lib/ftl/ftl_l2p.o 00:03:49.665 CC lib/scsi/scsi_rpc.o 00:03:49.665 CC lib/nvmf/stubs.o 00:03:49.923 CC lib/nvmf/mdns_server.o 00:03:49.923 CC lib/nvmf/rdma.o 00:03:49.923 CC lib/scsi/task.o 00:03:49.923 CC lib/ftl/ftl_l2p_flat.o 00:03:49.923 CC lib/nvmf/auth.o 00:03:50.181 CC lib/ftl/ftl_nv_cache.o 00:03:50.181 LIB libspdk_scsi.a 00:03:50.181 CC lib/ftl/ftl_band.o 00:03:50.181 CC lib/ftl/ftl_band_ops.o 00:03:50.181 CC lib/ftl/ftl_writer.o 00:03:50.181 SO libspdk_scsi.so.9.0 00:03:50.439 SYMLINK libspdk_scsi.so 00:03:50.439 CC lib/ftl/ftl_rq.o 00:03:50.439 CC lib/ftl/ftl_reloc.o 00:03:50.439 CC lib/ftl/ftl_l2p_cache.o 00:03:50.439 CC lib/ftl/ftl_p2l.o 00:03:50.697 CC lib/ftl/ftl_p2l_log.o 00:03:50.697 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.697 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.955 CC lib/iscsi/conn.o 00:03:50.955 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.955 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.955 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.955 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.955 CC lib/iscsi/init_grp.o 00:03:50.955 CC lib/iscsi/iscsi.o 00:03:51.213 CC lib/iscsi/param.o 00:03:51.213 CC lib/vhost/vhost.o 00:03:51.213 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.213 CC lib/iscsi/portal_grp.o 00:03:51.213 CC lib/iscsi/tgt_node.o 00:03:51.471 CC lib/iscsi/iscsi_subsystem.o 00:03:51.471 CC lib/iscsi/iscsi_rpc.o 00:03:51.471 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.471 CC lib/iscsi/task.o 00:03:51.471 CC lib/vhost/vhost_rpc.o 00:03:51.729 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.729 CC lib/vhost/vhost_scsi.o 00:03:51.729 CC lib/vhost/vhost_blk.o 00:03:51.729 CC lib/vhost/rte_vhost_user.o 00:03:51.987 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.987 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.987 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.987 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.987 CC lib/ftl/utils/ftl_conf.o 00:03:52.246 LIB libspdk_nvmf.a 00:03:52.246 CC lib/ftl/utils/ftl_md.o 00:03:52.246 CC lib/ftl/utils/ftl_mempool.o 00:03:52.246 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.246 SO libspdk_nvmf.so.20.0 00:03:52.506 CC lib/ftl/utils/ftl_property.o 00:03:52.506 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.506 LIB libspdk_iscsi.a 00:03:52.506 SYMLINK libspdk_nvmf.so 00:03:52.506 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.506 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.506 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.506 SO libspdk_iscsi.so.8.0 00:03:52.764 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.764 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.764 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:52.764 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.764 SYMLINK libspdk_iscsi.so 00:03:52.764 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:52.764 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:52.764 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.764 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:52.764 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:53.022 CC lib/ftl/base/ftl_base_dev.o 00:03:53.022 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.022 LIB libspdk_vhost.a 00:03:53.022 CC lib/ftl/ftl_trace.o 00:03:53.022 SO libspdk_vhost.so.8.0 00:03:53.022 SYMLINK libspdk_vhost.so 00:03:53.280 LIB libspdk_ftl.a 00:03:53.538 SO libspdk_ftl.so.9.0 00:03:53.796 SYMLINK libspdk_ftl.so 00:03:54.054 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.313 CC module/accel/ioat/accel_ioat.o 00:03:54.313 CC module/sock/posix/posix.o 00:03:54.313 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.313 CC module/accel/error/accel_error.o 00:03:54.313 CC module/fsdev/aio/fsdev_aio.o 00:03:54.313 CC module/blob/bdev/blob_bdev.o 00:03:54.313 CC module/accel/dsa/accel_dsa.o 00:03:54.313 CC module/accel/iaa/accel_iaa.o 00:03:54.313 CC module/keyring/file/keyring.o 00:03:54.313 LIB libspdk_env_dpdk_rpc.a 00:03:54.313 SO libspdk_env_dpdk_rpc.so.6.0 00:03:54.313 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.313 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.571 CC module/keyring/file/keyring_rpc.o 00:03:54.571 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.571 LIB libspdk_scheduler_dynamic.a 00:03:54.571 CC module/accel/error/accel_error_rpc.o 00:03:54.571 SO libspdk_scheduler_dynamic.so.4.0 00:03:54.571 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.571 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.571 LIB libspdk_accel_iaa.a 00:03:54.571 LIB libspdk_blob_bdev.a 00:03:54.571 LIB libspdk_keyring_file.a 00:03:54.571 LIB libspdk_accel_ioat.a 00:03:54.571 SO libspdk_blob_bdev.so.12.0 00:03:54.571 SO libspdk_accel_iaa.so.3.0 00:03:54.571 SO libspdk_keyring_file.so.2.0 00:03:54.571 LIB libspdk_accel_error.a 00:03:54.571 SO libspdk_accel_ioat.so.6.0 00:03:54.571 SO libspdk_accel_error.so.2.0 00:03:54.829 SYMLINK libspdk_keyring_file.so 00:03:54.829 SYMLINK libspdk_accel_iaa.so 
00:03:54.829 SYMLINK libspdk_blob_bdev.so 00:03:54.829 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:54.829 LIB libspdk_accel_dsa.a 00:03:54.829 SYMLINK libspdk_accel_ioat.so 00:03:54.829 SYMLINK libspdk_accel_error.so 00:03:54.829 CC module/fsdev/aio/linux_aio_mgr.o 00:03:54.829 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.829 SO libspdk_accel_dsa.so.5.0 00:03:54.829 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.829 SYMLINK libspdk_accel_dsa.so 00:03:54.829 CC module/keyring/linux/keyring.o 00:03:54.829 CC module/sock/uring/uring.o 00:03:54.829 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.087 LIB libspdk_scheduler_gscheduler.a 00:03:55.087 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:55.087 LIB libspdk_fsdev_aio.a 00:03:55.087 SO libspdk_scheduler_gscheduler.so.4.0 00:03:55.087 LIB libspdk_sock_posix.a 00:03:55.087 CC module/bdev/delay/vbdev_delay.o 00:03:55.087 SO libspdk_fsdev_aio.so.1.0 00:03:55.087 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:55.087 SO libspdk_sock_posix.so.6.0 00:03:55.087 SYMLINK libspdk_scheduler_gscheduler.so 00:03:55.087 CC module/keyring/linux/keyring_rpc.o 00:03:55.087 SYMLINK libspdk_fsdev_aio.so 00:03:55.087 CC module/bdev/error/vbdev_error.o 00:03:55.087 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.087 SYMLINK libspdk_sock_posix.so 00:03:55.087 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.087 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.087 CC module/bdev/gpt/gpt.o 00:03:55.345 LIB libspdk_keyring_linux.a 00:03:55.345 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.345 CC module/bdev/malloc/bdev_malloc.o 00:03:55.345 SO libspdk_keyring_linux.so.1.0 00:03:55.345 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.345 SYMLINK libspdk_keyring_linux.so 00:03:55.345 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.345 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.345 LIB libspdk_bdev_error.a 00:03:55.345 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.345 LIB libspdk_bdev_delay.a 00:03:55.345 SO libspdk_bdev_error.so.6.0 00:03:55.345 SO libspdk_bdev_delay.so.6.0 00:03:55.603 SYMLINK libspdk_bdev_error.so 00:03:55.603 LIB libspdk_blobfs_bdev.a 00:03:55.603 SYMLINK libspdk_bdev_delay.so 00:03:55.603 SO libspdk_blobfs_bdev.so.6.0 00:03:55.603 CC module/bdev/null/bdev_null.o 00:03:55.603 SYMLINK libspdk_blobfs_bdev.so 00:03:55.603 CC module/bdev/null/bdev_null_rpc.o 00:03:55.603 LIB libspdk_sock_uring.a 00:03:55.603 SO libspdk_sock_uring.so.5.0 00:03:55.603 LIB libspdk_bdev_malloc.a 00:03:55.603 LIB libspdk_bdev_gpt.a 00:03:55.603 SO libspdk_bdev_malloc.so.6.0 00:03:55.603 CC module/bdev/raid/bdev_raid.o 00:03:55.861 CC module/bdev/nvme/bdev_nvme.o 00:03:55.861 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.861 SYMLINK libspdk_sock_uring.so 00:03:55.861 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.861 SO libspdk_bdev_gpt.so.6.0 00:03:55.861 SYMLINK libspdk_bdev_malloc.so 00:03:55.861 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.861 LIB libspdk_bdev_lvol.a 00:03:55.861 SYMLINK libspdk_bdev_gpt.so 00:03:55.861 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.861 CC module/bdev/nvme/nvme_rpc.o 00:03:55.861 SO libspdk_bdev_lvol.so.6.0 00:03:55.861 LIB libspdk_bdev_null.a 00:03:55.861 SO libspdk_bdev_null.so.6.0 00:03:55.861 CC module/bdev/split/vbdev_split.o 00:03:55.861 SYMLINK libspdk_bdev_lvol.so 00:03:55.861 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.119 SYMLINK libspdk_bdev_null.so 00:03:56.119 CC module/bdev/raid/raid0.o 00:03:56.119 CC module/bdev/raid/raid1.o 00:03:56.119 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.119 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:56.119 CC module/bdev/raid/concat.o 00:03:56.119 LIB libspdk_bdev_passthru.a 00:03:56.119 SO libspdk_bdev_passthru.so.6.0 00:03:56.377 CC module/bdev/nvme/vbdev_opal.o 00:03:56.377 SYMLINK libspdk_bdev_passthru.so 00:03:56.377 LIB libspdk_bdev_split.a 00:03:56.377 SO libspdk_bdev_split.so.6.0 00:03:56.377 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.377 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.377 CC module/bdev/uring/bdev_uring.o 00:03:56.377 SYMLINK libspdk_bdev_split.so 00:03:56.377 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.377 CC module/bdev/aio/bdev_aio.o 00:03:56.634 CC module/bdev/ftl/bdev_ftl.o 00:03:56.634 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.634 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.634 CC module/bdev/uring/bdev_uring_rpc.o 00:03:56.892 LIB libspdk_bdev_zone_block.a 00:03:56.892 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.893 LIB libspdk_bdev_raid.a 00:03:56.893 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.893 SO libspdk_bdev_zone_block.so.6.0 00:03:56.893 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.893 LIB libspdk_bdev_aio.a 00:03:56.893 SO libspdk_bdev_raid.so.6.0 00:03:56.893 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.893 SYMLINK libspdk_bdev_zone_block.so 00:03:56.893 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.893 SO libspdk_bdev_aio.so.6.0 00:03:56.893 LIB libspdk_bdev_uring.a 00:03:56.893 SO libspdk_bdev_uring.so.6.0 00:03:56.893 SYMLINK libspdk_bdev_raid.so 00:03:56.893 SYMLINK libspdk_bdev_aio.so 00:03:56.893 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.893 SYMLINK libspdk_bdev_uring.so 00:03:56.893 LIB libspdk_bdev_ftl.a 00:03:57.151 SO libspdk_bdev_ftl.so.6.0 00:03:57.151 LIB libspdk_bdev_iscsi.a 00:03:57.151 SYMLINK libspdk_bdev_ftl.so 00:03:57.151 SO libspdk_bdev_iscsi.so.6.0 00:03:57.151 SYMLINK libspdk_bdev_iscsi.so 00:03:57.408 LIB libspdk_bdev_virtio.a 00:03:57.408 SO libspdk_bdev_virtio.so.6.0 00:03:57.408 SYMLINK libspdk_bdev_virtio.so 00:03:58.782 LIB libspdk_bdev_nvme.a 00:03:58.782 SO libspdk_bdev_nvme.so.7.1 00:03:58.782 SYMLINK libspdk_bdev_nvme.so 00:03:59.348 CC module/event/subsystems/fsdev/fsdev.o 00:03:59.348 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.348 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.348 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.348 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.348 CC module/event/subsystems/sock/sock.o 00:03:59.348 CC module/event/subsystems/vmd/vmd.o 00:03:59.348 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.348 CC module/event/subsystems/keyring/keyring.o 00:03:59.348 LIB libspdk_event_fsdev.a 00:03:59.348 LIB libspdk_event_iobuf.a 00:03:59.348 LIB libspdk_event_vhost_blk.a 00:03:59.348 LIB libspdk_event_scheduler.a 00:03:59.348 SO libspdk_event_fsdev.so.1.0 00:03:59.348 LIB libspdk_event_vmd.a 00:03:59.348 LIB libspdk_event_sock.a 00:03:59.348 LIB libspdk_event_keyring.a 00:03:59.348 SO libspdk_event_vhost_blk.so.3.0 00:03:59.348 SO libspdk_event_iobuf.so.3.0 00:03:59.348 SO libspdk_event_scheduler.so.4.0 00:03:59.348 SO libspdk_event_keyring.so.1.0 00:03:59.348 SO libspdk_event_sock.so.5.0 00:03:59.348 SO libspdk_event_vmd.so.6.0 00:03:59.611 SYMLINK libspdk_event_fsdev.so 00:03:59.611 SYMLINK libspdk_event_vhost_blk.so 00:03:59.611 SYMLINK libspdk_event_scheduler.so 00:03:59.611 SYMLINK libspdk_event_iobuf.so 00:03:59.611 SYMLINK libspdk_event_sock.so 00:03:59.611 SYMLINK libspdk_event_keyring.so 00:03:59.611 SYMLINK libspdk_event_vmd.so 00:03:59.878 CC 
module/event/subsystems/accel/accel.o 00:03:59.878 LIB libspdk_event_accel.a 00:04:00.136 SO libspdk_event_accel.so.6.0 00:04:00.136 SYMLINK libspdk_event_accel.so 00:04:00.394 CC module/event/subsystems/bdev/bdev.o 00:04:00.652 LIB libspdk_event_bdev.a 00:04:00.652 SO libspdk_event_bdev.so.6.0 00:04:00.652 SYMLINK libspdk_event_bdev.so 00:04:00.910 CC module/event/subsystems/scsi/scsi.o 00:04:00.910 CC module/event/subsystems/ublk/ublk.o 00:04:00.910 CC module/event/subsystems/nbd/nbd.o 00:04:00.910 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.910 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.168 LIB libspdk_event_nbd.a 00:04:01.168 LIB libspdk_event_ublk.a 00:04:01.168 LIB libspdk_event_scsi.a 00:04:01.168 SO libspdk_event_nbd.so.6.0 00:04:01.168 SO libspdk_event_ublk.so.3.0 00:04:01.168 SO libspdk_event_scsi.so.6.0 00:04:01.168 SYMLINK libspdk_event_nbd.so 00:04:01.168 SYMLINK libspdk_event_scsi.so 00:04:01.168 SYMLINK libspdk_event_ublk.so 00:04:01.168 LIB libspdk_event_nvmf.a 00:04:01.426 SO libspdk_event_nvmf.so.6.0 00:04:01.426 SYMLINK libspdk_event_nvmf.so 00:04:01.426 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.426 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.685 LIB libspdk_event_vhost_scsi.a 00:04:01.685 LIB libspdk_event_iscsi.a 00:04:01.685 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.685 SO libspdk_event_iscsi.so.6.0 00:04:01.685 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.943 SYMLINK libspdk_event_iscsi.so 00:04:01.943 SO libspdk.so.6.0 00:04:01.943 SYMLINK libspdk.so 00:04:02.199 CC app/trace_record/trace_record.o 00:04:02.200 CXX app/trace/trace.o 00:04:02.457 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.457 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.457 CC app/nvmf_tgt/nvmf_main.o 00:04:02.457 CC examples/util/zipf/zipf.o 00:04:02.457 CC examples/ioat/perf/perf.o 00:04:02.457 CC test/thread/poller_perf/poller_perf.o 00:04:02.457 CC test/dma/test_dma/test_dma.o 00:04:02.457 CC test/app/bdev_svc/bdev_svc.o 00:04:02.714 LINK interrupt_tgt 00:04:02.714 LINK zipf 00:04:02.714 LINK nvmf_tgt 00:04:02.715 LINK iscsi_tgt 00:04:02.715 LINK poller_perf 00:04:02.715 LINK bdev_svc 00:04:02.715 LINK spdk_trace_record 00:04:02.715 LINK ioat_perf 00:04:02.715 LINK spdk_trace 00:04:02.973 TEST_HEADER include/spdk/accel.h 00:04:02.973 TEST_HEADER include/spdk/accel_module.h 00:04:02.973 TEST_HEADER include/spdk/assert.h 00:04:02.973 TEST_HEADER include/spdk/barrier.h 00:04:02.973 TEST_HEADER include/spdk/base64.h 00:04:02.973 TEST_HEADER include/spdk/bdev.h 00:04:02.973 TEST_HEADER include/spdk/bdev_module.h 00:04:02.973 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.973 TEST_HEADER include/spdk/bit_array.h 00:04:02.973 TEST_HEADER include/spdk/bit_pool.h 00:04:02.973 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.973 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.973 TEST_HEADER include/spdk/blobfs.h 00:04:02.973 TEST_HEADER include/spdk/blob.h 00:04:02.973 TEST_HEADER include/spdk/conf.h 00:04:02.973 TEST_HEADER include/spdk/config.h 00:04:02.973 TEST_HEADER include/spdk/cpuset.h 00:04:02.973 TEST_HEADER include/spdk/crc16.h 00:04:02.973 TEST_HEADER include/spdk/crc32.h 00:04:02.973 TEST_HEADER include/spdk/crc64.h 00:04:02.973 TEST_HEADER include/spdk/dif.h 00:04:02.973 TEST_HEADER include/spdk/dma.h 00:04:02.973 TEST_HEADER include/spdk/endian.h 00:04:02.973 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.973 TEST_HEADER include/spdk/env.h 00:04:02.973 TEST_HEADER include/spdk/event.h 00:04:02.973 TEST_HEADER include/spdk/fd_group.h 00:04:02.973 
TEST_HEADER include/spdk/fd.h 00:04:02.973 TEST_HEADER include/spdk/file.h 00:04:02.973 TEST_HEADER include/spdk/fsdev.h 00:04:02.973 TEST_HEADER include/spdk/fsdev_module.h 00:04:02.973 TEST_HEADER include/spdk/ftl.h 00:04:02.973 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.973 TEST_HEADER include/spdk/hexlify.h 00:04:02.973 TEST_HEADER include/spdk/histogram_data.h 00:04:02.973 TEST_HEADER include/spdk/idxd.h 00:04:02.973 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.973 TEST_HEADER include/spdk/init.h 00:04:02.973 TEST_HEADER include/spdk/ioat.h 00:04:02.973 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.973 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.973 TEST_HEADER include/spdk/json.h 00:04:02.973 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.973 TEST_HEADER include/spdk/keyring.h 00:04:02.973 TEST_HEADER include/spdk/keyring_module.h 00:04:02.973 TEST_HEADER include/spdk/likely.h 00:04:02.973 TEST_HEADER include/spdk/log.h 00:04:02.973 TEST_HEADER include/spdk/lvol.h 00:04:02.973 TEST_HEADER include/spdk/md5.h 00:04:02.973 TEST_HEADER include/spdk/memory.h 00:04:02.973 TEST_HEADER include/spdk/mmio.h 00:04:02.973 TEST_HEADER include/spdk/nbd.h 00:04:02.973 TEST_HEADER include/spdk/net.h 00:04:02.973 CC test/app/histogram_perf/histogram_perf.o 00:04:02.973 TEST_HEADER include/spdk/notify.h 00:04:02.973 TEST_HEADER include/spdk/nvme.h 00:04:02.973 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.973 CC examples/ioat/verify/verify.o 00:04:02.973 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.973 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.973 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.973 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.973 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.973 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.973 TEST_HEADER include/spdk/nvmf.h 00:04:02.973 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.973 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.973 TEST_HEADER include/spdk/opal.h 00:04:02.973 TEST_HEADER include/spdk/opal_spec.h 00:04:02.973 TEST_HEADER include/spdk/pci_ids.h 00:04:02.973 TEST_HEADER include/spdk/pipe.h 00:04:02.973 TEST_HEADER include/spdk/queue.h 00:04:02.973 TEST_HEADER include/spdk/reduce.h 00:04:03.231 TEST_HEADER include/spdk/rpc.h 00:04:03.231 TEST_HEADER include/spdk/scheduler.h 00:04:03.231 TEST_HEADER include/spdk/scsi.h 00:04:03.231 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.231 TEST_HEADER include/spdk/sock.h 00:04:03.231 TEST_HEADER include/spdk/stdinc.h 00:04:03.231 CC app/spdk_lspci/spdk_lspci.o 00:04:03.231 TEST_HEADER include/spdk/string.h 00:04:03.231 TEST_HEADER include/spdk/thread.h 00:04:03.231 CC app/spdk_nvme_perf/perf.o 00:04:03.231 LINK test_dma 00:04:03.231 TEST_HEADER include/spdk/trace.h 00:04:03.231 TEST_HEADER include/spdk/trace_parser.h 00:04:03.231 TEST_HEADER include/spdk/tree.h 00:04:03.231 TEST_HEADER include/spdk/ublk.h 00:04:03.231 TEST_HEADER include/spdk/util.h 00:04:03.231 TEST_HEADER include/spdk/uuid.h 00:04:03.231 TEST_HEADER include/spdk/version.h 00:04:03.231 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.231 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.231 TEST_HEADER include/spdk/vhost.h 00:04:03.231 TEST_HEADER include/spdk/vmd.h 00:04:03.231 TEST_HEADER include/spdk/xor.h 00:04:03.231 TEST_HEADER include/spdk/zipf.h 00:04:03.231 CC app/spdk_nvme_identify/identify.o 00:04:03.231 CXX test/cpp_headers/accel.o 00:04:03.231 CC app/spdk_tgt/spdk_tgt.o 00:04:03.231 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.231 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.231 
LINK histogram_perf 00:04:03.231 LINK spdk_lspci 00:04:03.231 LINK verify 00:04:03.231 CXX test/cpp_headers/accel_module.o 00:04:03.489 CXX test/cpp_headers/assert.o 00:04:03.489 LINK spdk_tgt 00:04:03.489 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.489 CXX test/cpp_headers/barrier.o 00:04:03.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.489 LINK nvme_fuzz 00:04:03.747 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.747 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.747 CC test/env/vtophys/vtophys.o 00:04:03.747 CC examples/thread/thread/thread_ex.o 00:04:03.747 CXX test/cpp_headers/base64.o 00:04:03.747 LINK spdk_nvme_discover 00:04:03.747 LINK mem_callbacks 00:04:04.005 LINK vtophys 00:04:04.005 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:04.005 CXX test/cpp_headers/bdev.o 00:04:04.005 LINK thread 00:04:04.005 LINK spdk_nvme_identify 00:04:04.005 LINK spdk_nvme_perf 00:04:04.005 LINK env_dpdk_post_init 00:04:04.005 CC test/app/jsoncat/jsoncat.o 00:04:04.005 CXX test/cpp_headers/bdev_module.o 00:04:04.263 LINK vhost_fuzz 00:04:04.263 CC test/app/stub/stub.o 00:04:04.263 CC app/spdk_top/spdk_top.o 00:04:04.263 LINK jsoncat 00:04:04.263 CC test/env/pci/pci_ut.o 00:04:04.263 CC test/env/memory/memory_ut.o 00:04:04.263 LINK stub 00:04:04.263 CXX test/cpp_headers/bdev_zone.o 00:04:04.521 CC app/vhost/vhost.o 00:04:04.521 CC examples/sock/hello_world/hello_sock.o 00:04:04.521 CC app/spdk_dd/spdk_dd.o 00:04:04.521 CXX test/cpp_headers/bit_array.o 00:04:04.844 LINK vhost 00:04:04.844 CC app/fio/nvme/fio_plugin.o 00:04:04.844 LINK hello_sock 00:04:04.844 CC test/event/event_perf/event_perf.o 00:04:04.844 LINK pci_ut 00:04:04.844 CXX test/cpp_headers/bit_pool.o 00:04:04.844 CXX test/cpp_headers/blob_bdev.o 00:04:04.844 LINK event_perf 00:04:05.101 LINK spdk_dd 00:04:05.101 LINK spdk_top 00:04:05.101 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.101 CXX test/cpp_headers/blobfs_bdev.o 00:04:05.360 CC test/event/reactor/reactor.o 00:04:05.360 LINK spdk_nvme 00:04:05.360 LINK iscsi_fuzz 00:04:05.360 CC examples/idxd/perf/perf.o 00:04:05.360 LINK lsvmd 00:04:05.360 CC test/event/reactor_perf/reactor_perf.o 00:04:05.360 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:05.360 CXX test/cpp_headers/blobfs.o 00:04:05.360 CC test/event/app_repeat/app_repeat.o 00:04:05.360 LINK reactor 00:04:05.360 CXX test/cpp_headers/blob.o 00:04:05.360 LINK reactor_perf 00:04:05.618 CC app/fio/bdev/fio_plugin.o 00:04:05.618 CC examples/vmd/led/led.o 00:04:05.618 LINK memory_ut 00:04:05.618 LINK app_repeat 00:04:05.618 CXX test/cpp_headers/conf.o 00:04:05.618 LINK idxd_perf 00:04:05.618 LINK hello_fsdev 00:04:05.618 CC test/rpc_client/rpc_client_test.o 00:04:05.876 LINK led 00:04:05.876 CC test/nvme/aer/aer.o 00:04:05.876 CXX test/cpp_headers/config.o 00:04:05.876 CC examples/accel/perf/accel_perf.o 00:04:05.876 CXX test/cpp_headers/cpuset.o 00:04:05.876 CXX test/cpp_headers/crc16.o 00:04:05.876 LINK rpc_client_test 00:04:05.876 CC test/event/scheduler/scheduler.o 00:04:06.135 LINK spdk_bdev 00:04:06.135 CC examples/nvme/hello_world/hello_world.o 00:04:06.135 CC examples/blob/cli/blobcli.o 00:04:06.135 CC examples/blob/hello_world/hello_blob.o 00:04:06.135 CXX test/cpp_headers/crc32.o 00:04:06.135 LINK aer 00:04:06.135 CC examples/nvme/reconnect/reconnect.o 00:04:06.135 LINK scheduler 00:04:06.394 CXX test/cpp_headers/crc64.o 00:04:06.394 LINK hello_world 00:04:06.394 CC test/accel/dif/dif.o 00:04:06.394 LINK hello_blob 00:04:06.394 CC test/nvme/reset/reset.o 00:04:06.394 LINK accel_perf 
00:04:06.394 CC test/blobfs/mkfs/mkfs.o 00:04:06.394 CXX test/cpp_headers/dif.o 00:04:06.652 CC test/nvme/sgl/sgl.o 00:04:06.652 CC test/nvme/e2edp/nvme_dp.o 00:04:06.652 LINK reconnect 00:04:06.652 LINK blobcli 00:04:06.652 CXX test/cpp_headers/dma.o 00:04:06.652 LINK reset 00:04:06.652 LINK mkfs 00:04:06.652 CC test/nvme/overhead/overhead.o 00:04:06.652 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.652 CXX test/cpp_headers/endian.o 00:04:06.910 CXX test/cpp_headers/env_dpdk.o 00:04:06.910 CXX test/cpp_headers/env.o 00:04:06.910 LINK sgl 00:04:06.910 CXX test/cpp_headers/event.o 00:04:06.910 LINK nvme_dp 00:04:06.910 CXX test/cpp_headers/fd_group.o 00:04:06.910 LINK overhead 00:04:06.910 CXX test/cpp_headers/fd.o 00:04:06.910 CXX test/cpp_headers/file.o 00:04:06.910 LINK dif 00:04:06.910 CXX test/cpp_headers/fsdev.o 00:04:07.169 CC test/lvol/esnap/esnap.o 00:04:07.169 CC examples/nvme/arbitration/arbitration.o 00:04:07.169 LINK nvme_manage 00:04:07.169 CC examples/nvme/hotplug/hotplug.o 00:04:07.169 CXX test/cpp_headers/fsdev_module.o 00:04:07.169 CC test/nvme/err_injection/err_injection.o 00:04:07.169 CXX test/cpp_headers/ftl.o 00:04:07.169 CC test/nvme/startup/startup.o 00:04:07.169 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.169 CC test/nvme/reserve/reserve.o 00:04:07.427 CXX test/cpp_headers/gpt_spec.o 00:04:07.427 LINK arbitration 00:04:07.427 LINK err_injection 00:04:07.427 LINK hotplug 00:04:07.427 LINK startup 00:04:07.427 LINK hello_bdev 00:04:07.427 CXX test/cpp_headers/hexlify.o 00:04:07.427 LINK reserve 00:04:07.685 CC test/nvme/simple_copy/simple_copy.o 00:04:07.685 CC test/bdev/bdevio/bdevio.o 00:04:07.685 CC test/nvme/connect_stress/connect_stress.o 00:04:07.685 CC test/nvme/boot_partition/boot_partition.o 00:04:07.685 CXX test/cpp_headers/histogram_data.o 00:04:07.685 CXX test/cpp_headers/idxd.o 00:04:07.685 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:07.685 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.685 CC test/nvme/compliance/nvme_compliance.o 00:04:07.943 LINK simple_copy 00:04:07.943 LINK boot_partition 00:04:07.943 LINK connect_stress 00:04:07.943 CXX test/cpp_headers/idxd_spec.o 00:04:07.943 LINK cmb_copy 00:04:07.943 CXX test/cpp_headers/init.o 00:04:07.943 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.202 LINK bdevio 00:04:08.202 LINK nvme_compliance 00:04:08.202 CXX test/cpp_headers/ioat.o 00:04:08.202 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.202 CC examples/nvme/abort/abort.o 00:04:08.202 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.202 CC test/nvme/fdp/fdp.o 00:04:08.202 LINK fused_ordering 00:04:08.202 CXX test/cpp_headers/ioat_spec.o 00:04:08.202 CXX test/cpp_headers/iscsi_spec.o 00:04:08.460 CXX test/cpp_headers/json.o 00:04:08.460 LINK doorbell_aers 00:04:08.460 LINK pmr_persistence 00:04:08.460 CXX test/cpp_headers/jsonrpc.o 00:04:08.460 CC test/nvme/cuse/cuse.o 00:04:08.460 CXX test/cpp_headers/keyring.o 00:04:08.460 CXX test/cpp_headers/keyring_module.o 00:04:08.460 CXX test/cpp_headers/likely.o 00:04:08.460 LINK fdp 00:04:08.460 CXX test/cpp_headers/log.o 00:04:08.718 LINK abort 00:04:08.718 LINK bdevperf 00:04:08.718 CXX test/cpp_headers/lvol.o 00:04:08.718 CXX test/cpp_headers/md5.o 00:04:08.718 CXX test/cpp_headers/memory.o 00:04:08.718 CXX test/cpp_headers/mmio.o 00:04:08.718 CXX test/cpp_headers/nbd.o 00:04:08.718 CXX test/cpp_headers/net.o 00:04:08.718 CXX test/cpp_headers/notify.o 00:04:08.718 CXX test/cpp_headers/nvme.o 00:04:08.718 CXX test/cpp_headers/nvme_intel.o 00:04:08.976 CXX 
test/cpp_headers/nvme_ocssd.o 00:04:08.976 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:08.976 CXX test/cpp_headers/nvme_spec.o 00:04:08.976 CXX test/cpp_headers/nvme_zns.o 00:04:08.976 CXX test/cpp_headers/nvmf_cmd.o 00:04:08.976 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:08.976 CXX test/cpp_headers/nvmf.o 00:04:08.976 CC examples/nvmf/nvmf/nvmf.o 00:04:08.976 CXX test/cpp_headers/nvmf_spec.o 00:04:08.976 CXX test/cpp_headers/nvmf_transport.o 00:04:09.234 CXX test/cpp_headers/opal.o 00:04:09.234 CXX test/cpp_headers/opal_spec.o 00:04:09.234 CXX test/cpp_headers/pci_ids.o 00:04:09.234 CXX test/cpp_headers/pipe.o 00:04:09.234 CXX test/cpp_headers/queue.o 00:04:09.234 CXX test/cpp_headers/reduce.o 00:04:09.234 CXX test/cpp_headers/rpc.o 00:04:09.234 CXX test/cpp_headers/scheduler.o 00:04:09.234 CXX test/cpp_headers/scsi.o 00:04:09.234 CXX test/cpp_headers/scsi_spec.o 00:04:09.234 CXX test/cpp_headers/sock.o 00:04:09.234 CXX test/cpp_headers/stdinc.o 00:04:09.492 LINK nvmf 00:04:09.492 CXX test/cpp_headers/string.o 00:04:09.492 CXX test/cpp_headers/thread.o 00:04:09.492 CXX test/cpp_headers/trace.o 00:04:09.492 CXX test/cpp_headers/trace_parser.o 00:04:09.492 CXX test/cpp_headers/tree.o 00:04:09.492 CXX test/cpp_headers/ublk.o 00:04:09.492 CXX test/cpp_headers/util.o 00:04:09.492 CXX test/cpp_headers/uuid.o 00:04:09.492 CXX test/cpp_headers/version.o 00:04:09.492 CXX test/cpp_headers/vfio_user_pci.o 00:04:09.750 CXX test/cpp_headers/vfio_user_spec.o 00:04:09.750 CXX test/cpp_headers/vhost.o 00:04:09.750 CXX test/cpp_headers/vmd.o 00:04:09.750 CXX test/cpp_headers/xor.o 00:04:09.750 CXX test/cpp_headers/zipf.o 00:04:10.008 LINK cuse 00:04:12.540 LINK esnap 00:04:13.107 00:04:13.108 real 1m35.174s 00:04:13.108 user 8m32.303s 00:04:13.108 sys 1m45.522s 00:04:13.108 ************************************ 00:04:13.108 END TEST make 00:04:13.108 ************************************ 00:04:13.108 13:45:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:13.108 13:45:05 make -- common/autotest_common.sh@10 -- $ set +x 00:04:13.108 13:45:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:13.108 13:45:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:13.108 13:45:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:13.108 13:45:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.108 13:45:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:13.108 13:45:05 -- pm/common@44 -- $ pid=5304 00:04:13.108 13:45:05 -- pm/common@50 -- $ kill -TERM 5304 00:04:13.108 13:45:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.108 13:45:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:13.108 13:45:05 -- pm/common@44 -- $ pid=5306 00:04:13.108 13:45:05 -- pm/common@50 -- $ kill -TERM 5306 00:04:13.108 13:45:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:13.108 13:45:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:13.108 13:45:05 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:13.108 13:45:05 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:13.108 13:45:05 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:13.108 13:45:06 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:13.108 13:45:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.108 13:45:06 
-- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.108 13:45:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.108 13:45:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.108 13:45:06 -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.108 13:45:06 -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.108 13:45:06 -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.108 13:45:06 -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.108 13:45:06 -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.108 13:45:06 -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.108 13:45:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.108 13:45:06 -- scripts/common.sh@344 -- # case "$op" in 00:04:13.108 13:45:06 -- scripts/common.sh@345 -- # : 1 00:04:13.108 13:45:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.108 13:45:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.108 13:45:06 -- scripts/common.sh@365 -- # decimal 1 00:04:13.108 13:45:06 -- scripts/common.sh@353 -- # local d=1 00:04:13.108 13:45:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.108 13:45:06 -- scripts/common.sh@355 -- # echo 1 00:04:13.108 13:45:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.108 13:45:06 -- scripts/common.sh@366 -- # decimal 2 00:04:13.108 13:45:06 -- scripts/common.sh@353 -- # local d=2 00:04:13.108 13:45:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.108 13:45:06 -- scripts/common.sh@355 -- # echo 2 00:04:13.108 13:45:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.108 13:45:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.108 13:45:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.108 13:45:06 -- scripts/common.sh@368 -- # return 0 00:04:13.108 13:45:06 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.108 13:45:06 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.108 --rc genhtml_branch_coverage=1 00:04:13.108 --rc genhtml_function_coverage=1 00:04:13.108 --rc genhtml_legend=1 00:04:13.108 --rc geninfo_all_blocks=1 00:04:13.108 --rc geninfo_unexecuted_blocks=1 00:04:13.108 00:04:13.108 ' 00:04:13.108 13:45:06 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.108 --rc genhtml_branch_coverage=1 00:04:13.108 --rc genhtml_function_coverage=1 00:04:13.108 --rc genhtml_legend=1 00:04:13.108 --rc geninfo_all_blocks=1 00:04:13.108 --rc geninfo_unexecuted_blocks=1 00:04:13.108 00:04:13.108 ' 00:04:13.108 13:45:06 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.108 --rc genhtml_branch_coverage=1 00:04:13.108 --rc genhtml_function_coverage=1 00:04:13.108 --rc genhtml_legend=1 00:04:13.108 --rc geninfo_all_blocks=1 00:04:13.108 --rc geninfo_unexecuted_blocks=1 00:04:13.108 00:04:13.108 ' 00:04:13.108 13:45:06 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.108 --rc genhtml_branch_coverage=1 00:04:13.108 --rc genhtml_function_coverage=1 00:04:13.108 --rc genhtml_legend=1 00:04:13.108 --rc geninfo_all_blocks=1 00:04:13.108 --rc geninfo_unexecuted_blocks=1 00:04:13.108 00:04:13.108 ' 00:04:13.108 13:45:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.108 
13:45:06 -- nvmf/common.sh@7 -- # uname -s 00:04:13.108 13:45:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.108 13:45:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.108 13:45:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.108 13:45:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.108 13:45:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.108 13:45:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.108 13:45:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.108 13:45:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.108 13:45:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.108 13:45:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.108 13:45:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:04:13.108 13:45:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:04:13.108 13:45:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.108 13:45:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.108 13:45:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:13.108 13:45:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.108 13:45:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.108 13:45:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:13.108 13:45:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.108 13:45:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.108 13:45:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.108 13:45:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.108 13:45:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.108 13:45:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.108 13:45:06 -- paths/export.sh@5 -- # export PATH 00:04:13.108 13:45:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.108 13:45:06 -- nvmf/common.sh@51 -- # : 0 00:04:13.108 13:45:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:13.108 13:45:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:13.108 13:45:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.108 13:45:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.108 13:45:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.108 13:45:06 -- nvmf/common.sh@33 -- # '[' 
'' -eq 1 ']' 00:04:13.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:13.108 13:45:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:13.108 13:45:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:13.108 13:45:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:13.108 13:45:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.108 13:45:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.108 13:45:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.108 13:45:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.108 13:45:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.367 13:45:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.367 13:45:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.367 13:45:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.367 13:45:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:13.367 13:45:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.367 13:45:06 -- spdk/autotest.sh@48 -- # udevadm_pid=55650 00:04:13.367 13:45:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.367 13:45:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:13.367 13:45:06 -- pm/common@17 -- # local monitor 00:04:13.367 13:45:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.367 13:45:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.367 13:45:06 -- pm/common@25 -- # sleep 1 00:04:13.367 13:45:06 -- pm/common@21 -- # date +%s 00:04:13.367 13:45:06 -- pm/common@21 -- # date +%s 00:04:13.367 13:45:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924706 00:04:13.367 13:45:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924706 00:04:13.367 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924706_collect-cpu-load.pm.log 00:04:13.367 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924706_collect-vmstat.pm.log 00:04:14.302 13:45:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.302 13:45:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:14.302 13:45:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.302 13:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:14.302 13:45:07 -- spdk/autotest.sh@59 -- # create_test_list 00:04:14.302 13:45:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:14.302 13:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:14.302 13:45:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:14.302 13:45:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:14.302 13:45:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:14.302 13:45:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:14.302 13:45:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:14.302 13:45:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:14.302 13:45:07 -- common/autotest_common.sh@1457 -- # uname 00:04:14.302 13:45:07 
-- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:14.302 13:45:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:14.302 13:45:07 -- common/autotest_common.sh@1477 -- # uname 00:04:14.302 13:45:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:14.302 13:45:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:14.303 13:45:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:14.561 lcov: LCOV version 1.15 00:04:14.561 13:45:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:32.645 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:32.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:47.635 13:45:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:47.635 13:45:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.635 13:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.635 13:45:39 -- spdk/autotest.sh@78 -- # rm -f 00:04:47.635 13:45:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.635 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:47.635 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:47.635 13:45:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:47.635 13:45:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:47.635 13:45:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:47.635 13:45:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:47.635 13:45:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:47.635 13:45:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:47.635 13:45:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:47.635 13:45:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:47.635 13:45:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:47.635 13:45:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:47.635 13:45:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:47.635 13:45:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:47.635 13:45:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:04:47.635 13:45:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:47.635 13:45:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:47.635 13:45:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.635 13:45:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:47.635 13:45:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:47.635 13:45:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:47.635 13:45:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.635 13:45:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:47.635 13:45:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.635 13:45:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.635 13:45:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:47.635 13:45:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:47.635 13:45:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.635 No valid GPT data, bailing 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # pt= 00:04:47.635 13:45:40 -- scripts/common.sh@395 -- # return 1 00:04:47.635 13:45:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.635 1+0 records in 00:04:47.635 1+0 records out 00:04:47.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592192 s, 177 MB/s 00:04:47.635 13:45:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.635 13:45:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.635 13:45:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:47.635 13:45:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:47.635 13:45:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:47.635 No valid GPT data, bailing 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # pt= 00:04:47.635 13:45:40 -- scripts/common.sh@395 -- # return 1 00:04:47.635 13:45:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:47.635 1+0 records in 00:04:47.635 1+0 records out 00:04:47.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440921 s, 238 MB/s 00:04:47.635 13:45:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.635 13:45:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.635 13:45:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:47.635 13:45:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:47.635 13:45:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:47.635 No valid GPT data, bailing 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # pt= 00:04:47.635 13:45:40 -- scripts/common.sh@395 -- # return 1 00:04:47.635 13:45:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:47.635 1+0 records in 00:04:47.635 1+0 records out 00:04:47.635 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00411382 s, 255 MB/s 00:04:47.635 13:45:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.635 13:45:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.635 13:45:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:47.635 13:45:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:47.635 13:45:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:47.635 No valid GPT data, bailing 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:47.635 13:45:40 -- scripts/common.sh@394 -- # pt= 00:04:47.635 13:45:40 -- scripts/common.sh@395 -- # return 1 00:04:47.635 13:45:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:47.635 1+0 records in 00:04:47.635 1+0 records out 00:04:47.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443168 s, 237 MB/s 00:04:47.636 13:45:40 -- spdk/autotest.sh@105 -- # sync 00:04:47.893 13:45:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:47.893 13:45:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:47.893 13:45:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.794 13:45:42 -- spdk/autotest.sh@111 -- # uname -s 00:04:49.794 13:45:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:49.794 13:45:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:49.794 13:45:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:50.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.359 Hugepages 00:04:50.359 node hugesize free / total 00:04:50.359 node0 1048576kB 0 / 0 00:04:50.359 node0 2048kB 0 / 0 00:04:50.359 00:04:50.359 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.617 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:50.617 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:50.617 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:50.617 13:45:43 -- spdk/autotest.sh@117 -- # uname -s 00:04:50.617 13:45:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:50.618 13:45:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:50.618 13:45:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.554 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.554 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.554 13:45:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:52.496 13:45:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:52.496 13:45:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:52.496 13:45:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.496 13:45:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:52.496 13:45:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.496 13:45:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.496 13:45:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.496 13:45:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.496 13:45:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.754 13:45:45 
-- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:52.754 13:45:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:52.754 13:45:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.013 Waiting for block devices as requested 00:04:53.013 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.272 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.272 13:45:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.272 13:45:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:53.272 13:45:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:53.272 13:45:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:53.272 13:45:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:53.272 13:45:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.272 13:45:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.272 13:45:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.272 13:45:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.272 13:45:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.272 13:45:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:53.272 13:45:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.272 13:45:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.273 13:45:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.273 13:45:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.273 13:45:46 -- common/autotest_common.sh@1543 -- # continue 00:04:53.273 13:45:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.273 13:45:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.273 13:45:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:53.273 13:45:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:53.273 13:45:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:53.273 13:45:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 
00:04:53.273 13:45:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.273 13:45:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.273 13:45:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.273 13:45:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.273 13:45:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.273 13:45:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.273 13:45:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:53.273 13:45:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.273 13:45:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.273 13:45:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.273 13:45:46 -- common/autotest_common.sh@1543 -- # continue 00:04:53.273 13:45:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.273 13:45:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.273 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.273 13:45:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.273 13:45:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.273 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.273 13:45:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.208 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.208 13:45:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:54.208 13:45:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.208 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.208 13:45:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:54.208 13:45:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:54.208 13:45:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.208 13:45:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:54.208 13:45:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:54.208 13:45:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:54.208 13:45:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:54.208 13:45:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:54.208 13:45:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.208 13:45:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.208 13:45:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.208 13:45:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.208 13:45:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:54.208 13:45:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:54.208 13:45:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.208 13:45:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.208 13:45:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:54.208 13:45:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.208 13:45:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.208 13:45:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.208 13:45:47 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:54.208 13:45:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.208 13:45:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.208 13:45:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:54.208 13:45:47 -- common/autotest_common.sh@1572 -- # return 0 00:04:54.208 13:45:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:54.208 13:45:47 -- common/autotest_common.sh@1580 -- # return 0 00:04:54.208 13:45:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:54.208 13:45:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:54.208 13:45:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.208 13:45:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.208 13:45:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:54.208 13:45:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.208 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.208 13:45:47 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:54.208 13:45:47 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:54.208 13:45:47 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:54.208 13:45:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.208 13:45:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.208 13:45:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.208 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.208 ************************************ 00:04:54.208 START TEST env 00:04:54.208 ************************************ 00:04:54.208 13:45:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.466 * Looking for test storage... 00:04:54.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:54.466 13:45:47 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.466 13:45:47 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.466 13:45:47 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.466 13:45:47 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.466 13:45:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.466 13:45:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.467 13:45:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.467 13:45:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.467 13:45:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.467 13:45:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.467 13:45:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.467 13:45:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.467 13:45:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.467 13:45:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.467 13:45:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.467 13:45:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:54.467 13:45:47 env -- scripts/common.sh@345 -- # : 1 00:04:54.467 13:45:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.467 13:45:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.467 13:45:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:54.467 13:45:47 env -- scripts/common.sh@353 -- # local d=1 00:04:54.467 13:45:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.467 13:45:47 env -- scripts/common.sh@355 -- # echo 1 00:04:54.467 13:45:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.467 13:45:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:54.467 13:45:47 env -- scripts/common.sh@353 -- # local d=2 00:04:54.467 13:45:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.467 13:45:47 env -- scripts/common.sh@355 -- # echo 2 00:04:54.467 13:45:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.467 13:45:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.467 13:45:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.467 13:45:47 env -- scripts/common.sh@368 -- # return 0 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.467 --rc genhtml_branch_coverage=1 00:04:54.467 --rc genhtml_function_coverage=1 00:04:54.467 --rc genhtml_legend=1 00:04:54.467 --rc geninfo_all_blocks=1 00:04:54.467 --rc geninfo_unexecuted_blocks=1 00:04:54.467 00:04:54.467 ' 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.467 --rc genhtml_branch_coverage=1 00:04:54.467 --rc genhtml_function_coverage=1 00:04:54.467 --rc genhtml_legend=1 00:04:54.467 --rc geninfo_all_blocks=1 00:04:54.467 --rc geninfo_unexecuted_blocks=1 00:04:54.467 00:04:54.467 ' 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.467 --rc genhtml_branch_coverage=1 00:04:54.467 --rc genhtml_function_coverage=1 00:04:54.467 --rc genhtml_legend=1 00:04:54.467 --rc geninfo_all_blocks=1 00:04:54.467 --rc geninfo_unexecuted_blocks=1 00:04:54.467 00:04:54.467 ' 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.467 --rc genhtml_branch_coverage=1 00:04:54.467 --rc genhtml_function_coverage=1 00:04:54.467 --rc genhtml_legend=1 00:04:54.467 --rc geninfo_all_blocks=1 00:04:54.467 --rc geninfo_unexecuted_blocks=1 00:04:54.467 00:04:54.467 ' 00:04:54.467 13:45:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.467 13:45:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.467 13:45:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.467 ************************************ 00:04:54.467 START TEST env_memory 00:04:54.467 ************************************ 00:04:54.467 13:45:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.467 00:04:54.467 00:04:54.467 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.467 http://cunit.sourceforge.net/ 00:04:54.467 00:04:54.467 00:04:54.467 Suite: memory 00:04:54.467 Test: alloc and free memory map ...[2024-12-11 13:45:47.510114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.725 passed 00:04:54.725 Test: mem map translation ...[2024-12-11 13:45:47.543437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.725 [2024-12-11 13:45:47.543476] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.725 [2024-12-11 13:45:47.543540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.725 [2024-12-11 13:45:47.543562] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.725 passed 00:04:54.725 Test: mem map registration ...[2024-12-11 13:45:47.609353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:54.725 [2024-12-11 13:45:47.609409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:54.725 passed 00:04:54.725 Test: mem map adjacent registrations ...passed 00:04:54.725 00:04:54.725 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.725 suites 1 1 n/a 0 0 00:04:54.725 tests 4 4 4 0 0 00:04:54.725 asserts 152 152 152 0 n/a 00:04:54.725 00:04:54.725 Elapsed time = 0.218 seconds 00:04:54.725 ************************************ 00:04:54.725 END TEST env_memory 00:04:54.725 ************************************ 00:04:54.725 00:04:54.725 real 0m0.238s 00:04:54.725 user 0m0.215s 00:04:54.725 sys 0m0.017s 00:04:54.725 13:45:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.725 13:45:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:54.725 13:45:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.725 13:45:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.725 13:45:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.725 13:45:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.725 ************************************ 00:04:54.725 START TEST env_vtophys 00:04:54.725 ************************************ 00:04:54.725 13:45:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.725 EAL: lib.eal log level changed from notice to debug 00:04:54.726 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 1 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 2 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 3 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 4 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 5 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 6 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 7 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 8 as core 0 on socket 0 00:04:54.726 EAL: Detected lcore 9 as core 0 on socket 0 00:04:54.985 EAL: Maximum logical cores by configuration: 128 00:04:54.985 EAL: Detected CPU lcores: 10 00:04:54.985 EAL: Detected NUMA nodes: 1 00:04:54.985 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:54.985 EAL: Detected shared linkage of DPDK 00:04:54.985 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:54.985 EAL: Selected IOVA mode 'PA' 00:04:54.985 EAL: Probing VFIO support... 00:04:54.985 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.985 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:54.985 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.985 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.985 EAL: Setting up physically contiguous memory... 00:04:54.985 EAL: Setting maximum number of open files to 524288 00:04:54.985 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.985 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.985 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.985 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.985 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.985 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.985 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.985 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.985 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.985 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.985 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.985 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.985 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.985 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.985 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.985 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.985 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.985 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.985 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.985 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.985 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.985 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.985 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.985 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.985 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.985 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.985 EAL: Hugepages will be freed exactly as allocated. 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: TSC frequency is ~2200000 KHz 00:04:54.985 EAL: Main lcore 0 is ready (tid=7f5062711a00;cpuset=[0]) 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 0 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.985 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.985 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.985 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.985 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:54.985 00:04:54.985 00:04:54.985 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.985 http://cunit.sourceforge.net/ 00:04:54.985 00:04:54.985 00:04:54.985 Suite: components_suite 00:04:54.985 Test: vtophys_malloc_test ...passed 00:04:54.985 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.985 EAL: Trying to obtain current memory policy. 
00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.985 EAL: No shared files mode enabled, IPC is disabled 00:04:54.985 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.985 EAL: Trying to obtain current memory policy. 00:04:54.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.985 EAL: Restoring previous memory policy: 4 00:04:54.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.985 EAL: request: mp_malloc_sync 00:04:54.986 EAL: No shared files mode enabled, IPC is disabled 00:04:54.986 EAL: Heap on socket 0 was expanded by 130MB 00:04:55.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.245 EAL: request: mp_malloc_sync 00:04:55.245 EAL: No shared files mode enabled, IPC is disabled 00:04:55.245 EAL: Heap on socket 0 was shrunk by 130MB 00:04:55.245 EAL: Trying to obtain current memory policy. 00:04:55.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.245 EAL: Restoring previous memory policy: 4 00:04:55.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.245 EAL: request: mp_malloc_sync 00:04:55.245 EAL: No shared files mode enabled, IPC is disabled 00:04:55.245 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.245 EAL: request: mp_malloc_sync 00:04:55.245 EAL: No shared files mode enabled, IPC is disabled 00:04:55.245 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.245 EAL: Trying to obtain current memory policy. 00:04:55.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.503 EAL: Restoring previous memory policy: 4 00:04:55.503 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.503 EAL: request: mp_malloc_sync 00:04:55.503 EAL: No shared files mode enabled, IPC is disabled 00:04:55.503 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.503 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.762 EAL: request: mp_malloc_sync 00:04:55.762 EAL: No shared files mode enabled, IPC is disabled 00:04:55.762 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.762 EAL: Trying to obtain current memory policy. 
00:04:55.762 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.020 EAL: Restoring previous memory policy: 4 00:04:56.020 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.020 EAL: request: mp_malloc_sync 00:04:56.020 EAL: No shared files mode enabled, IPC is disabled 00:04:56.020 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.020 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.279 passed 00:04:56.279 00:04:56.279 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.279 suites 1 1 n/a 0 0 00:04:56.279 tests 2 2 2 0 0 00:04:56.279 asserts 5379 5379 5379 0 n/a 00:04:56.279 00:04:56.279 Elapsed time = 1.249 seconds 00:04:56.279 EAL: request: mp_malloc_sync 00:04:56.279 EAL: No shared files mode enabled, IPC is disabled 00:04:56.279 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.279 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.279 EAL: request: mp_malloc_sync 00:04:56.279 EAL: No shared files mode enabled, IPC is disabled 00:04:56.279 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.279 EAL: No shared files mode enabled, IPC is disabled 00:04:56.279 EAL: No shared files mode enabled, IPC is disabled 00:04:56.279 EAL: No shared files mode enabled, IPC is disabled 00:04:56.279 00:04:56.279 real 0m1.468s 00:04:56.279 user 0m0.818s 00:04:56.279 sys 0m0.507s 00:04:56.279 13:45:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.279 ************************************ 00:04:56.279 END TEST env_vtophys 00:04:56.279 ************************************ 00:04:56.279 13:45:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.279 13:45:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.279 13:45:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.279 13:45:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.279 13:45:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.279 ************************************ 00:04:56.279 START TEST env_pci 00:04:56.279 ************************************ 00:04:56.279 13:45:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.279 00:04:56.279 00:04:56.279 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.279 http://cunit.sourceforge.net/ 00:04:56.279 00:04:56.279 00:04:56.279 Suite: pci 00:04:56.279 Test: pci_hook ...[2024-12-11 13:45:49.285446] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57882 has claimed it 00:04:56.279 passed 00:04:56.279 00:04:56.279 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.279 suites 1 1 n/a 0 0 00:04:56.279 tests 1 1 1 0 0 00:04:56.279 asserts 25 25 25 0 n/a 00:04:56.279 00:04:56.279 Elapsed time = 0.002 seconds 00:04:56.279 EAL: Cannot find device (10000:00:01.0) 00:04:56.279 EAL: Failed to attach device on primary process 00:04:56.279 00:04:56.279 real 0m0.023s 00:04:56.279 user 0m0.006s 00:04:56.279 sys 0m0.016s 00:04:56.279 13:45:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.279 13:45:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.279 ************************************ 00:04:56.279 END TEST env_pci 00:04:56.279 ************************************ 00:04:56.538 13:45:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.538 13:45:49 env -- env/env.sh@15 -- # uname 00:04:56.538 13:45:49 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.538 13:45:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.538 13:45:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.538 13:45:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:56.538 13:45:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.538 13:45:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.538 ************************************ 00:04:56.538 START TEST env_dpdk_post_init 00:04:56.538 ************************************ 00:04:56.538 13:45:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.538 EAL: Detected CPU lcores: 10 00:04:56.538 EAL: Detected NUMA nodes: 1 00:04:56.538 EAL: Detected shared linkage of DPDK 00:04:56.538 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.538 EAL: Selected IOVA mode 'PA' 00:04:56.538 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.538 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.538 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.538 Starting DPDK initialization... 00:04:56.538 Starting SPDK post initialization... 00:04:56.538 SPDK NVMe probe 00:04:56.538 Attaching to 0000:00:10.0 00:04:56.538 Attaching to 0000:00:11.0 00:04:56.538 Attached to 0000:00:10.0 00:04:56.538 Attached to 0000:00:11.0 00:04:56.538 Cleaning up... 00:04:56.538 00:04:56.538 real 0m0.190s 00:04:56.538 user 0m0.050s 00:04:56.538 sys 0m0.041s 00:04:56.538 13:45:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.538 13:45:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.538 ************************************ 00:04:56.538 END TEST env_dpdk_post_init 00:04:56.538 ************************************ 00:04:56.538 13:45:49 env -- env/env.sh@26 -- # uname 00:04:56.797 13:45:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.797 13:45:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.797 13:45:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.797 13:45:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.797 13:45:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.797 ************************************ 00:04:56.797 START TEST env_mem_callbacks 00:04:56.797 ************************************ 00:04:56.797 13:45:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.797 EAL: Detected CPU lcores: 10 00:04:56.797 EAL: Detected NUMA nodes: 1 00:04:56.797 EAL: Detected shared linkage of DPDK 00:04:56.797 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.797 EAL: Selected IOVA mode 'PA' 00:04:56.797 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.797 00:04:56.797 00:04:56.797 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.797 http://cunit.sourceforge.net/ 00:04:56.797 00:04:56.797 00:04:56.797 Suite: memory 00:04:56.797 Test: test ... 
00:04:56.797 register 0x200000200000 2097152 00:04:56.797 malloc 3145728 00:04:56.797 register 0x200000400000 4194304 00:04:56.797 buf 0x200000500000 len 3145728 PASSED 00:04:56.797 malloc 64 00:04:56.797 buf 0x2000004fff40 len 64 PASSED 00:04:56.797 malloc 4194304 00:04:56.797 register 0x200000800000 6291456 00:04:56.797 buf 0x200000a00000 len 4194304 PASSED 00:04:56.797 free 0x200000500000 3145728 00:04:56.797 free 0x2000004fff40 64 00:04:56.797 unregister 0x200000400000 4194304 PASSED 00:04:56.797 free 0x200000a00000 4194304 00:04:56.797 unregister 0x200000800000 6291456 PASSED 00:04:56.797 malloc 8388608 00:04:56.797 register 0x200000400000 10485760 00:04:56.797 buf 0x200000600000 len 8388608 PASSED 00:04:56.797 free 0x200000600000 8388608 00:04:56.797 unregister 0x200000400000 10485760 PASSED 00:04:56.797 passed 00:04:56.797 00:04:56.797 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.797 suites 1 1 n/a 0 0 00:04:56.797 tests 1 1 1 0 0 00:04:56.797 asserts 15 15 15 0 n/a 00:04:56.797 00:04:56.797 Elapsed time = 0.009 seconds 00:04:56.797 00:04:56.797 real 0m0.145s 00:04:56.797 user 0m0.016s 00:04:56.797 sys 0m0.027s 00:04:56.797 13:45:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.797 ************************************ 00:04:56.797 END TEST env_mem_callbacks 00:04:56.797 ************************************ 00:04:56.797 13:45:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.797 00:04:56.797 real 0m2.522s 00:04:56.797 user 0m1.319s 00:04:56.797 sys 0m0.832s 00:04:56.797 13:45:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.797 13:45:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.797 ************************************ 00:04:56.797 END TEST env 00:04:56.797 ************************************ 00:04:56.797 13:45:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.797 13:45:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.798 13:45:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.798 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.798 ************************************ 00:04:56.798 START TEST rpc 00:04:56.798 ************************************ 00:04:56.798 13:45:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.056 * Looking for test storage... 
00:04:57.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.056 13:45:49 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.056 13:45:49 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.056 13:45:49 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.056 13:45:49 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.056 13:45:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.056 13:45:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.056 13:45:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.056 13:45:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.057 13:45:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.057 13:45:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.057 13:45:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.057 13:45:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.057 13:45:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.057 13:45:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.057 13:45:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.057 13:45:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.057 13:45:49 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.057 13:45:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.057 13:45:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.057 13:45:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.057 13:45:49 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.057 13:45:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.057 13:45:49 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.057 13:45:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.057 13:45:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.057 13:45:50 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.057 13:45:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.057 13:45:50 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.057 13:45:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.057 13:45:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.057 13:45:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.057 13:45:50 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.057 --rc genhtml_branch_coverage=1 00:04:57.057 --rc genhtml_function_coverage=1 00:04:57.057 --rc genhtml_legend=1 00:04:57.057 --rc geninfo_all_blocks=1 00:04:57.057 --rc geninfo_unexecuted_blocks=1 00:04:57.057 00:04:57.057 ' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.057 --rc genhtml_branch_coverage=1 00:04:57.057 --rc genhtml_function_coverage=1 00:04:57.057 --rc genhtml_legend=1 00:04:57.057 --rc geninfo_all_blocks=1 00:04:57.057 --rc geninfo_unexecuted_blocks=1 00:04:57.057 00:04:57.057 ' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.057 --rc genhtml_branch_coverage=1 00:04:57.057 --rc genhtml_function_coverage=1 00:04:57.057 --rc 
genhtml_legend=1 00:04:57.057 --rc geninfo_all_blocks=1 00:04:57.057 --rc geninfo_unexecuted_blocks=1 00:04:57.057 00:04:57.057 ' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.057 --rc genhtml_branch_coverage=1 00:04:57.057 --rc genhtml_function_coverage=1 00:04:57.057 --rc genhtml_legend=1 00:04:57.057 --rc geninfo_all_blocks=1 00:04:57.057 --rc geninfo_unexecuted_blocks=1 00:04:57.057 00:04:57.057 ' 00:04:57.057 13:45:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57999 00:04:57.057 13:45:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.057 13:45:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57999 00:04:57.057 13:45:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 57999 ']' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.057 13:45:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.057 [2024-12-11 13:45:50.082168] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:57.057 [2024-12-11 13:45:50.082344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:04:57.316 [2024-12-11 13:45:50.246384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.316 [2024-12-11 13:45:50.312176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.316 [2024-12-11 13:45:50.312248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57999' to capture a snapshot of events at runtime. 00:04:57.316 [2024-12-11 13:45:50.312263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.316 [2024-12-11 13:45:50.312275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.316 [2024-12-11 13:45:50.312284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57999 for offline analysis/debug. 
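The app_setup_trace NOTICEs above spell out how the bdev tracepoints enabled with "-e bdev" can be inspected for this run; a minimal sketch using only the command and path printed in the log (pid 57999 and the shm file are specific to this job, and spdk_trace is assumed to be available from the SPDK build):

  # live snapshot of tracepoint events from the running spdk_tgt
  spdk_trace -s spdk_tgt -p 57999
  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid57999 ./spdk_tgt_trace.pid57999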
00:04:57.316 [2024-12-11 13:45:50.312785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.579 [2024-12-11 13:45:50.388091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.146 13:45:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.146 13:45:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.146 13:45:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.146 13:45:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.146 13:45:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.146 13:45:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.146 13:45:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.146 13:45:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.146 13:45:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.146 ************************************ 00:04:58.146 START TEST rpc_integrity 00:04:58.146 ************************************ 00:04:58.146 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.146 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.146 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.146 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.146 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.146 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.146 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.146 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.407 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.407 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.407 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.407 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.407 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.407 { 00:04:58.407 "name": "Malloc0", 00:04:58.407 "aliases": [ 00:04:58.407 "d7e2a606-6e13-4b1a-b32b-bed949d64742" 00:04:58.407 ], 00:04:58.408 "product_name": "Malloc disk", 00:04:58.408 "block_size": 512, 00:04:58.408 "num_blocks": 16384, 00:04:58.408 "uuid": "d7e2a606-6e13-4b1a-b32b-bed949d64742", 00:04:58.408 "assigned_rate_limits": { 00:04:58.408 "rw_ios_per_sec": 0, 00:04:58.408 "rw_mbytes_per_sec": 0, 00:04:58.408 "r_mbytes_per_sec": 0, 00:04:58.408 "w_mbytes_per_sec": 0 00:04:58.408 }, 00:04:58.408 "claimed": false, 00:04:58.408 "zoned": false, 00:04:58.408 
"supported_io_types": { 00:04:58.408 "read": true, 00:04:58.408 "write": true, 00:04:58.408 "unmap": true, 00:04:58.408 "flush": true, 00:04:58.408 "reset": true, 00:04:58.408 "nvme_admin": false, 00:04:58.408 "nvme_io": false, 00:04:58.408 "nvme_io_md": false, 00:04:58.408 "write_zeroes": true, 00:04:58.408 "zcopy": true, 00:04:58.408 "get_zone_info": false, 00:04:58.408 "zone_management": false, 00:04:58.408 "zone_append": false, 00:04:58.408 "compare": false, 00:04:58.408 "compare_and_write": false, 00:04:58.408 "abort": true, 00:04:58.408 "seek_hole": false, 00:04:58.408 "seek_data": false, 00:04:58.408 "copy": true, 00:04:58.408 "nvme_iov_md": false 00:04:58.408 }, 00:04:58.408 "memory_domains": [ 00:04:58.408 { 00:04:58.408 "dma_device_id": "system", 00:04:58.408 "dma_device_type": 1 00:04:58.408 }, 00:04:58.408 { 00:04:58.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.408 "dma_device_type": 2 00:04:58.408 } 00:04:58.408 ], 00:04:58.408 "driver_specific": {} 00:04:58.408 } 00:04:58.408 ]' 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 [2024-12-11 13:45:51.273757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.408 [2024-12-11 13:45:51.273805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.408 [2024-12-11 13:45:51.273824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c06b90 00:04:58.408 [2024-12-11 13:45:51.273833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.408 [2024-12-11 13:45:51.275510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.408 [2024-12-11 13:45:51.275547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.408 Passthru0 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.408 { 00:04:58.408 "name": "Malloc0", 00:04:58.408 "aliases": [ 00:04:58.408 "d7e2a606-6e13-4b1a-b32b-bed949d64742" 00:04:58.408 ], 00:04:58.408 "product_name": "Malloc disk", 00:04:58.408 "block_size": 512, 00:04:58.408 "num_blocks": 16384, 00:04:58.408 "uuid": "d7e2a606-6e13-4b1a-b32b-bed949d64742", 00:04:58.408 "assigned_rate_limits": { 00:04:58.408 "rw_ios_per_sec": 0, 00:04:58.408 "rw_mbytes_per_sec": 0, 00:04:58.408 "r_mbytes_per_sec": 0, 00:04:58.408 "w_mbytes_per_sec": 0 00:04:58.408 }, 00:04:58.408 "claimed": true, 00:04:58.408 "claim_type": "exclusive_write", 00:04:58.408 "zoned": false, 00:04:58.408 "supported_io_types": { 00:04:58.408 "read": true, 00:04:58.408 "write": true, 00:04:58.408 "unmap": true, 00:04:58.408 "flush": true, 00:04:58.408 "reset": true, 00:04:58.408 "nvme_admin": false, 
00:04:58.408 "nvme_io": false, 00:04:58.408 "nvme_io_md": false, 00:04:58.408 "write_zeroes": true, 00:04:58.408 "zcopy": true, 00:04:58.408 "get_zone_info": false, 00:04:58.408 "zone_management": false, 00:04:58.408 "zone_append": false, 00:04:58.408 "compare": false, 00:04:58.408 "compare_and_write": false, 00:04:58.408 "abort": true, 00:04:58.408 "seek_hole": false, 00:04:58.408 "seek_data": false, 00:04:58.408 "copy": true, 00:04:58.408 "nvme_iov_md": false 00:04:58.408 }, 00:04:58.408 "memory_domains": [ 00:04:58.408 { 00:04:58.408 "dma_device_id": "system", 00:04:58.408 "dma_device_type": 1 00:04:58.408 }, 00:04:58.408 { 00:04:58.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.408 "dma_device_type": 2 00:04:58.408 } 00:04:58.408 ], 00:04:58.408 "driver_specific": {} 00:04:58.408 }, 00:04:58.408 { 00:04:58.408 "name": "Passthru0", 00:04:58.408 "aliases": [ 00:04:58.408 "167fd537-1e72-5518-99fe-ed9e98786e37" 00:04:58.408 ], 00:04:58.408 "product_name": "passthru", 00:04:58.408 "block_size": 512, 00:04:58.408 "num_blocks": 16384, 00:04:58.408 "uuid": "167fd537-1e72-5518-99fe-ed9e98786e37", 00:04:58.408 "assigned_rate_limits": { 00:04:58.408 "rw_ios_per_sec": 0, 00:04:58.408 "rw_mbytes_per_sec": 0, 00:04:58.408 "r_mbytes_per_sec": 0, 00:04:58.408 "w_mbytes_per_sec": 0 00:04:58.408 }, 00:04:58.408 "claimed": false, 00:04:58.408 "zoned": false, 00:04:58.408 "supported_io_types": { 00:04:58.408 "read": true, 00:04:58.408 "write": true, 00:04:58.408 "unmap": true, 00:04:58.408 "flush": true, 00:04:58.408 "reset": true, 00:04:58.408 "nvme_admin": false, 00:04:58.408 "nvme_io": false, 00:04:58.408 "nvme_io_md": false, 00:04:58.408 "write_zeroes": true, 00:04:58.408 "zcopy": true, 00:04:58.408 "get_zone_info": false, 00:04:58.408 "zone_management": false, 00:04:58.408 "zone_append": false, 00:04:58.408 "compare": false, 00:04:58.408 "compare_and_write": false, 00:04:58.408 "abort": true, 00:04:58.408 "seek_hole": false, 00:04:58.408 "seek_data": false, 00:04:58.408 "copy": true, 00:04:58.408 "nvme_iov_md": false 00:04:58.408 }, 00:04:58.408 "memory_domains": [ 00:04:58.408 { 00:04:58.408 "dma_device_id": "system", 00:04:58.408 "dma_device_type": 1 00:04:58.408 }, 00:04:58.408 { 00:04:58.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.408 "dma_device_type": 2 00:04:58.408 } 00:04:58.408 ], 00:04:58.408 "driver_specific": { 00:04:58.408 "passthru": { 00:04:58.408 "name": "Passthru0", 00:04:58.408 "base_bdev_name": "Malloc0" 00:04:58.408 } 00:04:58.408 } 00:04:58.408 } 00:04:58.408 ]' 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.408 13:45:51 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.408 13:45:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.408 00:04:58.408 real 0m0.308s 00:04:58.408 user 0m0.204s 00:04:58.408 sys 0m0.036s 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.408 13:45:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 ************************************ 00:04:58.408 END TEST rpc_integrity 00:04:58.408 ************************************ 00:04:58.667 13:45:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.667 13:45:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.667 13:45:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.667 13:45:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 ************************************ 00:04:58.667 START TEST rpc_plugins 00:04:58.667 ************************************ 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.667 { 00:04:58.667 "name": "Malloc1", 00:04:58.667 "aliases": [ 00:04:58.667 "3ac2ca08-da47-4fe4-9618-27236cd4742c" 00:04:58.667 ], 00:04:58.667 "product_name": "Malloc disk", 00:04:58.667 "block_size": 4096, 00:04:58.667 "num_blocks": 256, 00:04:58.667 "uuid": "3ac2ca08-da47-4fe4-9618-27236cd4742c", 00:04:58.667 "assigned_rate_limits": { 00:04:58.667 "rw_ios_per_sec": 0, 00:04:58.667 "rw_mbytes_per_sec": 0, 00:04:58.667 "r_mbytes_per_sec": 0, 00:04:58.667 "w_mbytes_per_sec": 0 00:04:58.667 }, 00:04:58.667 "claimed": false, 00:04:58.667 "zoned": false, 00:04:58.667 "supported_io_types": { 00:04:58.667 "read": true, 00:04:58.667 "write": true, 00:04:58.667 "unmap": true, 00:04:58.667 "flush": true, 00:04:58.667 "reset": true, 00:04:58.667 "nvme_admin": false, 00:04:58.667 "nvme_io": false, 00:04:58.667 "nvme_io_md": false, 00:04:58.667 "write_zeroes": true, 00:04:58.667 "zcopy": true, 00:04:58.667 "get_zone_info": false, 00:04:58.667 "zone_management": false, 00:04:58.667 "zone_append": false, 00:04:58.667 "compare": false, 00:04:58.667 "compare_and_write": false, 00:04:58.667 "abort": true, 00:04:58.667 "seek_hole": false, 00:04:58.667 "seek_data": false, 00:04:58.667 "copy": true, 00:04:58.667 "nvme_iov_md": false 00:04:58.667 }, 00:04:58.667 "memory_domains": [ 00:04:58.667 { 
00:04:58.667 "dma_device_id": "system", 00:04:58.667 "dma_device_type": 1 00:04:58.667 }, 00:04:58.667 { 00:04:58.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.667 "dma_device_type": 2 00:04:58.667 } 00:04:58.667 ], 00:04:58.667 "driver_specific": {} 00:04:58.667 } 00:04:58.667 ]' 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.667 13:45:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.667 00:04:58.667 real 0m0.157s 00:04:58.667 user 0m0.102s 00:04:58.667 sys 0m0.018s 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.667 13:45:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 ************************************ 00:04:58.667 END TEST rpc_plugins 00:04:58.667 ************************************ 00:04:58.667 13:45:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.667 13:45:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.667 13:45:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.668 13:45:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.668 ************************************ 00:04:58.668 START TEST rpc_trace_cmd_test 00:04:58.668 ************************************ 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.668 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57999", 00:04:58.668 "tpoint_group_mask": "0x8", 00:04:58.668 "iscsi_conn": { 00:04:58.668 "mask": "0x2", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "scsi": { 00:04:58.668 "mask": "0x4", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "bdev": { 00:04:58.668 "mask": "0x8", 00:04:58.668 "tpoint_mask": "0xffffffffffffffff" 00:04:58.668 }, 00:04:58.668 "nvmf_rdma": { 00:04:58.668 "mask": "0x10", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "nvmf_tcp": { 00:04:58.668 "mask": "0x20", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "ftl": { 00:04:58.668 
"mask": "0x40", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "blobfs": { 00:04:58.668 "mask": "0x80", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "dsa": { 00:04:58.668 "mask": "0x200", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "thread": { 00:04:58.668 "mask": "0x400", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "nvme_pcie": { 00:04:58.668 "mask": "0x800", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "iaa": { 00:04:58.668 "mask": "0x1000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "nvme_tcp": { 00:04:58.668 "mask": "0x2000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "bdev_nvme": { 00:04:58.668 "mask": "0x4000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "sock": { 00:04:58.668 "mask": "0x8000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "blob": { 00:04:58.668 "mask": "0x10000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "bdev_raid": { 00:04:58.668 "mask": "0x20000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 }, 00:04:58.668 "scheduler": { 00:04:58.668 "mask": "0x40000", 00:04:58.668 "tpoint_mask": "0x0" 00:04:58.668 } 00:04:58.668 }' 00:04:58.668 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.926 00:04:58.926 real 0m0.264s 00:04:58.926 user 0m0.235s 00:04:58.926 sys 0m0.019s 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.926 13:45:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 ************************************ 00:04:58.926 END TEST rpc_trace_cmd_test 00:04:58.926 ************************************ 00:04:59.185 13:45:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.185 13:45:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.185 13:45:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.185 13:45:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.185 13:45:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.185 13:45:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 ************************************ 00:04:59.185 START TEST rpc_daemon_integrity 00:04:59.185 ************************************ 00:04:59.185 13:45:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.185 13:45:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 
13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.185 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.185 { 00:04:59.185 "name": "Malloc2", 00:04:59.185 "aliases": [ 00:04:59.185 "2e556b55-a6ae-4813-8dd1-30444cfcefdb" 00:04:59.185 ], 00:04:59.185 "product_name": "Malloc disk", 00:04:59.186 "block_size": 512, 00:04:59.186 "num_blocks": 16384, 00:04:59.186 "uuid": "2e556b55-a6ae-4813-8dd1-30444cfcefdb", 00:04:59.186 "assigned_rate_limits": { 00:04:59.186 "rw_ios_per_sec": 0, 00:04:59.186 "rw_mbytes_per_sec": 0, 00:04:59.186 "r_mbytes_per_sec": 0, 00:04:59.186 "w_mbytes_per_sec": 0 00:04:59.186 }, 00:04:59.186 "claimed": false, 00:04:59.186 "zoned": false, 00:04:59.186 "supported_io_types": { 00:04:59.186 "read": true, 00:04:59.186 "write": true, 00:04:59.186 "unmap": true, 00:04:59.186 "flush": true, 00:04:59.186 "reset": true, 00:04:59.186 "nvme_admin": false, 00:04:59.186 "nvme_io": false, 00:04:59.186 "nvme_io_md": false, 00:04:59.186 "write_zeroes": true, 00:04:59.186 "zcopy": true, 00:04:59.186 "get_zone_info": false, 00:04:59.186 "zone_management": false, 00:04:59.186 "zone_append": false, 00:04:59.186 "compare": false, 00:04:59.186 "compare_and_write": false, 00:04:59.186 "abort": true, 00:04:59.186 "seek_hole": false, 00:04:59.186 "seek_data": false, 00:04:59.186 "copy": true, 00:04:59.186 "nvme_iov_md": false 00:04:59.186 }, 00:04:59.186 "memory_domains": [ 00:04:59.186 { 00:04:59.186 "dma_device_id": "system", 00:04:59.186 "dma_device_type": 1 00:04:59.186 }, 00:04:59.186 { 00:04:59.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.186 "dma_device_type": 2 00:04:59.186 } 00:04:59.186 ], 00:04:59.186 "driver_specific": {} 00:04:59.186 } 00:04:59.186 ]' 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 [2024-12-11 13:45:52.154495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.186 [2024-12-11 13:45:52.154557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:59.186 [2024-12-11 13:45:52.154575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6c440 00:04:59.186 [2024-12-11 13:45:52.154584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.186 [2024-12-11 13:45:52.156618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.186 [2024-12-11 13:45:52.156668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.186 Passthru0 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.186 { 00:04:59.186 "name": "Malloc2", 00:04:59.186 "aliases": [ 00:04:59.186 "2e556b55-a6ae-4813-8dd1-30444cfcefdb" 00:04:59.186 ], 00:04:59.186 "product_name": "Malloc disk", 00:04:59.186 "block_size": 512, 00:04:59.186 "num_blocks": 16384, 00:04:59.186 "uuid": "2e556b55-a6ae-4813-8dd1-30444cfcefdb", 00:04:59.186 "assigned_rate_limits": { 00:04:59.186 "rw_ios_per_sec": 0, 00:04:59.186 "rw_mbytes_per_sec": 0, 00:04:59.186 "r_mbytes_per_sec": 0, 00:04:59.186 "w_mbytes_per_sec": 0 00:04:59.186 }, 00:04:59.186 "claimed": true, 00:04:59.186 "claim_type": "exclusive_write", 00:04:59.186 "zoned": false, 00:04:59.186 "supported_io_types": { 00:04:59.186 "read": true, 00:04:59.186 "write": true, 00:04:59.186 "unmap": true, 00:04:59.186 "flush": true, 00:04:59.186 "reset": true, 00:04:59.186 "nvme_admin": false, 00:04:59.186 "nvme_io": false, 00:04:59.186 "nvme_io_md": false, 00:04:59.186 "write_zeroes": true, 00:04:59.186 "zcopy": true, 00:04:59.186 "get_zone_info": false, 00:04:59.186 "zone_management": false, 00:04:59.186 "zone_append": false, 00:04:59.186 "compare": false, 00:04:59.186 "compare_and_write": false, 00:04:59.186 "abort": true, 00:04:59.186 "seek_hole": false, 00:04:59.186 "seek_data": false, 00:04:59.186 "copy": true, 00:04:59.186 "nvme_iov_md": false 00:04:59.186 }, 00:04:59.186 "memory_domains": [ 00:04:59.186 { 00:04:59.186 "dma_device_id": "system", 00:04:59.186 "dma_device_type": 1 00:04:59.186 }, 00:04:59.186 { 00:04:59.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.186 "dma_device_type": 2 00:04:59.186 } 00:04:59.186 ], 00:04:59.186 "driver_specific": {} 00:04:59.186 }, 00:04:59.186 { 00:04:59.186 "name": "Passthru0", 00:04:59.186 "aliases": [ 00:04:59.186 "70cbce2e-e255-535f-a69a-5ce8f56272fc" 00:04:59.186 ], 00:04:59.186 "product_name": "passthru", 00:04:59.186 "block_size": 512, 00:04:59.186 "num_blocks": 16384, 00:04:59.186 "uuid": "70cbce2e-e255-535f-a69a-5ce8f56272fc", 00:04:59.186 "assigned_rate_limits": { 00:04:59.186 "rw_ios_per_sec": 0, 00:04:59.186 "rw_mbytes_per_sec": 0, 00:04:59.186 "r_mbytes_per_sec": 0, 00:04:59.186 "w_mbytes_per_sec": 0 00:04:59.186 }, 00:04:59.186 "claimed": false, 00:04:59.186 "zoned": false, 00:04:59.186 "supported_io_types": { 00:04:59.186 "read": true, 00:04:59.186 "write": true, 00:04:59.186 "unmap": true, 00:04:59.186 "flush": true, 00:04:59.186 "reset": true, 00:04:59.186 "nvme_admin": false, 00:04:59.186 "nvme_io": false, 00:04:59.186 
"nvme_io_md": false, 00:04:59.186 "write_zeroes": true, 00:04:59.186 "zcopy": true, 00:04:59.186 "get_zone_info": false, 00:04:59.186 "zone_management": false, 00:04:59.186 "zone_append": false, 00:04:59.186 "compare": false, 00:04:59.186 "compare_and_write": false, 00:04:59.186 "abort": true, 00:04:59.186 "seek_hole": false, 00:04:59.186 "seek_data": false, 00:04:59.186 "copy": true, 00:04:59.186 "nvme_iov_md": false 00:04:59.186 }, 00:04:59.186 "memory_domains": [ 00:04:59.186 { 00:04:59.186 "dma_device_id": "system", 00:04:59.186 "dma_device_type": 1 00:04:59.186 }, 00:04:59.186 { 00:04:59.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.186 "dma_device_type": 2 00:04:59.186 } 00:04:59.186 ], 00:04:59.186 "driver_specific": { 00:04:59.186 "passthru": { 00:04:59.186 "name": "Passthru0", 00:04:59.186 "base_bdev_name": "Malloc2" 00:04:59.186 } 00:04:59.186 } 00:04:59.186 } 00:04:59.186 ]' 00:04:59.186 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.445 00:04:59.445 real 0m0.328s 00:04:59.445 user 0m0.217s 00:04:59.445 sys 0m0.037s 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.445 13:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 ************************************ 00:04:59.445 END TEST rpc_daemon_integrity 00:04:59.445 ************************************ 00:04:59.445 13:45:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.445 13:45:52 rpc -- rpc/rpc.sh@84 -- # killprocess 57999 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 57999 ']' 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@958 -- # kill -0 57999 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57999 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.445 killing process with pid 57999 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57999' 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@973 -- # kill 57999 00:04:59.445 13:45:52 rpc -- common/autotest_common.sh@978 -- # wait 57999 00:05:00.011 00:05:00.011 real 0m2.949s 00:05:00.011 user 0m3.790s 00:05:00.011 sys 0m0.703s 00:05:00.011 13:45:52 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.011 13:45:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.011 ************************************ 00:05:00.011 END TEST rpc 00:05:00.011 ************************************ 00:05:00.011 13:45:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.011 13:45:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.011 13:45:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.011 13:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:00.011 ************************************ 00:05:00.011 START TEST skip_rpc 00:05:00.011 ************************************ 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.011 * Looking for test storage... 00:05:00.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.011 13:45:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.011 --rc genhtml_branch_coverage=1 00:05:00.011 --rc genhtml_function_coverage=1 00:05:00.011 --rc genhtml_legend=1 00:05:00.011 --rc geninfo_all_blocks=1 00:05:00.011 --rc geninfo_unexecuted_blocks=1 00:05:00.011 00:05:00.011 ' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.011 --rc genhtml_branch_coverage=1 00:05:00.011 --rc genhtml_function_coverage=1 00:05:00.011 --rc genhtml_legend=1 00:05:00.011 --rc geninfo_all_blocks=1 00:05:00.011 --rc geninfo_unexecuted_blocks=1 00:05:00.011 00:05:00.011 ' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.011 --rc genhtml_branch_coverage=1 00:05:00.011 --rc genhtml_function_coverage=1 00:05:00.011 --rc genhtml_legend=1 00:05:00.011 --rc geninfo_all_blocks=1 00:05:00.011 --rc geninfo_unexecuted_blocks=1 00:05:00.011 00:05:00.011 ' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.011 --rc genhtml_branch_coverage=1 00:05:00.011 --rc genhtml_function_coverage=1 00:05:00.011 --rc genhtml_legend=1 00:05:00.011 --rc geninfo_all_blocks=1 00:05:00.011 --rc geninfo_unexecuted_blocks=1 00:05:00.011 00:05:00.011 ' 00:05:00.011 13:45:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.011 13:45:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.011 13:45:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.011 13:45:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.011 ************************************ 00:05:00.011 START TEST skip_rpc 00:05:00.011 ************************************ 00:05:00.011 13:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:00.011 13:45:53 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58205 00:05:00.011 13:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.011 13:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.011 13:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.270 [2024-12-11 13:45:53.078735] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:00.270 [2024-12-11 13:45:53.078862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:05:00.270 [2024-12-11 13:45:53.224787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.270 [2024-12-11 13:45:53.268572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.527 [2024-12-11 13:45:53.339002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.793 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58205 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58205 ']' 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58205 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58205 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.794 killing process with pid 58205 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58205' 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58205 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58205 00:05:05.794 00:05:05.794 real 0m5.421s 00:05:05.794 user 0m5.046s 00:05:05.794 sys 0m0.286s 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.794 13:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.794 ************************************ 00:05:05.794 END TEST skip_rpc 00:05:05.794 ************************************ 00:05:05.794 13:45:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.794 13:45:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.794 13:45:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.794 13:45:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.794 ************************************ 00:05:05.794 START TEST skip_rpc_with_json 00:05:05.794 ************************************ 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58292 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58292 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58292 ']' 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.794 13:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.794 [2024-12-11 13:45:58.545374] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
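The skip_rpc_with_json case starting here exercises the save_config / --json round trip that fills the rest of this log; a minimal sketch of that flow, built only from commands and paths that appear in this run (the redirection into config.json is an assumption — the real wiring lives in test/rpc/skip_rpc.sh, which is not reproduced here):

  # with the RPC server up, create the TCP transport and persist the running config
  rpc_cmd nvmf_create_transport -t tcp
  rpc_cmd save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # relaunch the target without an RPC server, restoring state from the saved JSON
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json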
00:05:05.794 [2024-12-11 13:45:58.545489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:05:05.794 [2024-12-11 13:45:58.688644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.794 [2024-12-11 13:45:58.734814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.794 [2024-12-11 13:45:58.805251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.740 [2024-12-11 13:45:59.527914] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.740 request: 00:05:06.740 { 00:05:06.740 "trtype": "tcp", 00:05:06.740 "method": "nvmf_get_transports", 00:05:06.740 "req_id": 1 00:05:06.740 } 00:05:06.740 Got JSON-RPC error response 00:05:06.740 response: 00:05:06.740 { 00:05:06.740 "code": -19, 00:05:06.740 "message": "No such device" 00:05:06.740 } 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.740 [2024-12-11 13:45:59.540040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.740 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.740 { 00:05:06.740 "subsystems": [ 00:05:06.740 { 00:05:06.740 "subsystem": "fsdev", 00:05:06.740 "config": [ 00:05:06.740 { 00:05:06.740 "method": "fsdev_set_opts", 00:05:06.740 "params": { 00:05:06.740 "fsdev_io_pool_size": 65535, 00:05:06.740 "fsdev_io_cache_size": 256 00:05:06.740 } 00:05:06.740 } 00:05:06.740 ] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "keyring", 00:05:06.740 "config": [] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "iobuf", 00:05:06.740 "config": [ 00:05:06.740 { 00:05:06.740 "method": "iobuf_set_options", 00:05:06.740 "params": { 00:05:06.740 "small_pool_count": 8192, 00:05:06.740 "large_pool_count": 1024, 00:05:06.740 "small_bufsize": 8192, 00:05:06.740 "large_bufsize": 135168, 00:05:06.740 "enable_numa": false 00:05:06.740 } 
00:05:06.740 } 00:05:06.740 ] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "sock", 00:05:06.740 "config": [ 00:05:06.740 { 00:05:06.740 "method": "sock_set_default_impl", 00:05:06.740 "params": { 00:05:06.740 "impl_name": "uring" 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "sock_impl_set_options", 00:05:06.740 "params": { 00:05:06.740 "impl_name": "ssl", 00:05:06.740 "recv_buf_size": 4096, 00:05:06.740 "send_buf_size": 4096, 00:05:06.740 "enable_recv_pipe": true, 00:05:06.740 "enable_quickack": false, 00:05:06.740 "enable_placement_id": 0, 00:05:06.740 "enable_zerocopy_send_server": true, 00:05:06.740 "enable_zerocopy_send_client": false, 00:05:06.740 "zerocopy_threshold": 0, 00:05:06.740 "tls_version": 0, 00:05:06.740 "enable_ktls": false 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "sock_impl_set_options", 00:05:06.740 "params": { 00:05:06.740 "impl_name": "posix", 00:05:06.740 "recv_buf_size": 2097152, 00:05:06.740 "send_buf_size": 2097152, 00:05:06.740 "enable_recv_pipe": true, 00:05:06.740 "enable_quickack": false, 00:05:06.740 "enable_placement_id": 0, 00:05:06.740 "enable_zerocopy_send_server": true, 00:05:06.740 "enable_zerocopy_send_client": false, 00:05:06.740 "zerocopy_threshold": 0, 00:05:06.740 "tls_version": 0, 00:05:06.740 "enable_ktls": false 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "sock_impl_set_options", 00:05:06.740 "params": { 00:05:06.740 "impl_name": "uring", 00:05:06.740 "recv_buf_size": 2097152, 00:05:06.740 "send_buf_size": 2097152, 00:05:06.740 "enable_recv_pipe": true, 00:05:06.740 "enable_quickack": false, 00:05:06.740 "enable_placement_id": 0, 00:05:06.740 "enable_zerocopy_send_server": false, 00:05:06.740 "enable_zerocopy_send_client": false, 00:05:06.740 "zerocopy_threshold": 0, 00:05:06.740 "tls_version": 0, 00:05:06.740 "enable_ktls": false 00:05:06.740 } 00:05:06.740 } 00:05:06.740 ] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "vmd", 00:05:06.740 "config": [] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "accel", 00:05:06.740 "config": [ 00:05:06.740 { 00:05:06.740 "method": "accel_set_options", 00:05:06.740 "params": { 00:05:06.740 "small_cache_size": 128, 00:05:06.740 "large_cache_size": 16, 00:05:06.740 "task_count": 2048, 00:05:06.740 "sequence_count": 2048, 00:05:06.740 "buf_count": 2048 00:05:06.740 } 00:05:06.740 } 00:05:06.740 ] 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "subsystem": "bdev", 00:05:06.740 "config": [ 00:05:06.740 { 00:05:06.740 "method": "bdev_set_options", 00:05:06.740 "params": { 00:05:06.740 "bdev_io_pool_size": 65535, 00:05:06.740 "bdev_io_cache_size": 256, 00:05:06.740 "bdev_auto_examine": true, 00:05:06.740 "iobuf_small_cache_size": 128, 00:05:06.740 "iobuf_large_cache_size": 16 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "bdev_raid_set_options", 00:05:06.740 "params": { 00:05:06.740 "process_window_size_kb": 1024, 00:05:06.740 "process_max_bandwidth_mb_sec": 0 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "bdev_iscsi_set_options", 00:05:06.740 "params": { 00:05:06.740 "timeout_sec": 30 00:05:06.740 } 00:05:06.740 }, 00:05:06.740 { 00:05:06.740 "method": "bdev_nvme_set_options", 00:05:06.740 "params": { 00:05:06.740 "action_on_timeout": "none", 00:05:06.740 "timeout_us": 0, 00:05:06.740 "timeout_admin_us": 0, 00:05:06.741 "keep_alive_timeout_ms": 10000, 00:05:06.741 "arbitration_burst": 0, 00:05:06.741 "low_priority_weight": 0, 00:05:06.741 "medium_priority_weight": 
0, 00:05:06.741 "high_priority_weight": 0, 00:05:06.741 "nvme_adminq_poll_period_us": 10000, 00:05:06.741 "nvme_ioq_poll_period_us": 0, 00:05:06.741 "io_queue_requests": 0, 00:05:06.741 "delay_cmd_submit": true, 00:05:06.741 "transport_retry_count": 4, 00:05:06.741 "bdev_retry_count": 3, 00:05:06.741 "transport_ack_timeout": 0, 00:05:06.741 "ctrlr_loss_timeout_sec": 0, 00:05:06.741 "reconnect_delay_sec": 0, 00:05:06.741 "fast_io_fail_timeout_sec": 0, 00:05:06.741 "disable_auto_failback": false, 00:05:06.741 "generate_uuids": false, 00:05:06.741 "transport_tos": 0, 00:05:06.741 "nvme_error_stat": false, 00:05:06.741 "rdma_srq_size": 0, 00:05:06.741 "io_path_stat": false, 00:05:06.741 "allow_accel_sequence": false, 00:05:06.741 "rdma_max_cq_size": 0, 00:05:06.741 "rdma_cm_event_timeout_ms": 0, 00:05:06.741 "dhchap_digests": [ 00:05:06.741 "sha256", 00:05:06.741 "sha384", 00:05:06.741 "sha512" 00:05:06.741 ], 00:05:06.741 "dhchap_dhgroups": [ 00:05:06.741 "null", 00:05:06.741 "ffdhe2048", 00:05:06.741 "ffdhe3072", 00:05:06.741 "ffdhe4096", 00:05:06.741 "ffdhe6144", 00:05:06.741 "ffdhe8192" 00:05:06.741 ], 00:05:06.741 "rdma_umr_per_io": false 00:05:06.741 } 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "method": "bdev_nvme_set_hotplug", 00:05:06.741 "params": { 00:05:06.741 "period_us": 100000, 00:05:06.741 "enable": false 00:05:06.741 } 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "method": "bdev_wait_for_examine" 00:05:06.741 } 00:05:06.741 ] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "scsi", 00:05:06.741 "config": null 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "scheduler", 00:05:06.741 "config": [ 00:05:06.741 { 00:05:06.741 "method": "framework_set_scheduler", 00:05:06.741 "params": { 00:05:06.741 "name": "static" 00:05:06.741 } 00:05:06.741 } 00:05:06.741 ] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "vhost_scsi", 00:05:06.741 "config": [] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "vhost_blk", 00:05:06.741 "config": [] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "ublk", 00:05:06.741 "config": [] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "nbd", 00:05:06.741 "config": [] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "nvmf", 00:05:06.741 "config": [ 00:05:06.741 { 00:05:06.741 "method": "nvmf_set_config", 00:05:06.741 "params": { 00:05:06.741 "discovery_filter": "match_any", 00:05:06.741 "admin_cmd_passthru": { 00:05:06.741 "identify_ctrlr": false 00:05:06.741 }, 00:05:06.741 "dhchap_digests": [ 00:05:06.741 "sha256", 00:05:06.741 "sha384", 00:05:06.741 "sha512" 00:05:06.741 ], 00:05:06.741 "dhchap_dhgroups": [ 00:05:06.741 "null", 00:05:06.741 "ffdhe2048", 00:05:06.741 "ffdhe3072", 00:05:06.741 "ffdhe4096", 00:05:06.741 "ffdhe6144", 00:05:06.741 "ffdhe8192" 00:05:06.741 ] 00:05:06.741 } 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "method": "nvmf_set_max_subsystems", 00:05:06.741 "params": { 00:05:06.741 "max_subsystems": 1024 00:05:06.741 } 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "method": "nvmf_set_crdt", 00:05:06.741 "params": { 00:05:06.741 "crdt1": 0, 00:05:06.741 "crdt2": 0, 00:05:06.741 "crdt3": 0 00:05:06.741 } 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "method": "nvmf_create_transport", 00:05:06.741 "params": { 00:05:06.741 "trtype": "TCP", 00:05:06.741 "max_queue_depth": 128, 00:05:06.741 "max_io_qpairs_per_ctrlr": 127, 00:05:06.741 "in_capsule_data_size": 4096, 00:05:06.741 "max_io_size": 131072, 00:05:06.741 "io_unit_size": 131072, 00:05:06.741 "max_aq_depth": 128, 00:05:06.741 
"num_shared_buffers": 511, 00:05:06.741 "buf_cache_size": 4294967295, 00:05:06.741 "dif_insert_or_strip": false, 00:05:06.741 "zcopy": false, 00:05:06.741 "c2h_success": true, 00:05:06.741 "sock_priority": 0, 00:05:06.741 "abort_timeout_sec": 1, 00:05:06.741 "ack_timeout": 0, 00:05:06.741 "data_wr_pool_size": 0 00:05:06.741 } 00:05:06.741 } 00:05:06.741 ] 00:05:06.741 }, 00:05:06.741 { 00:05:06.741 "subsystem": "iscsi", 00:05:06.741 "config": [ 00:05:06.741 { 00:05:06.741 "method": "iscsi_set_options", 00:05:06.741 "params": { 00:05:06.741 "node_base": "iqn.2016-06.io.spdk", 00:05:06.741 "max_sessions": 128, 00:05:06.741 "max_connections_per_session": 2, 00:05:06.741 "max_queue_depth": 64, 00:05:06.741 "default_time2wait": 2, 00:05:06.741 "default_time2retain": 20, 00:05:06.741 "first_burst_length": 8192, 00:05:06.741 "immediate_data": true, 00:05:06.741 "allow_duplicated_isid": false, 00:05:06.741 "error_recovery_level": 0, 00:05:06.741 "nop_timeout": 60, 00:05:06.741 "nop_in_interval": 30, 00:05:06.741 "disable_chap": false, 00:05:06.741 "require_chap": false, 00:05:06.741 "mutual_chap": false, 00:05:06.741 "chap_group": 0, 00:05:06.741 "max_large_datain_per_connection": 64, 00:05:06.741 "max_r2t_per_connection": 4, 00:05:06.741 "pdu_pool_size": 36864, 00:05:06.741 "immediate_data_pool_size": 16384, 00:05:06.741 "data_out_pool_size": 2048 00:05:06.741 } 00:05:06.741 } 00:05:06.741 ] 00:05:06.741 } 00:05:06.741 ] 00:05:06.741 } 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58292 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58292 ']' 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58292 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58292 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.741 killing process with pid 58292 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58292' 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58292 00:05:06.741 13:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58292 00:05:07.307 13:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58319 00:05:07.307 13:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.307 13:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58319 ']' 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # 
uname 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.573 killing process with pid 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58319' 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58319 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.573 00:05:12.573 real 0m7.069s 00:05:12.573 user 0m6.795s 00:05:12.573 sys 0m0.682s 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 ************************************ 00:05:12.573 END TEST skip_rpc_with_json 00:05:12.573 ************************************ 00:05:12.573 13:46:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:12.573 13:46:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.573 13:46:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.573 13:46:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 ************************************ 00:05:12.573 START TEST skip_rpc_with_delay 00:05:12.573 ************************************ 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.573 
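The skip_rpc_with_json round trip that finishes above (save a running target's configuration, relaunch with --json, and grep the log for 'TCP Transport Init') can be reproduced by hand roughly as follows; this is a sketch only, and the /tmp/config.json output path is illustrative rather than the file the test itself uses:

    # Sketch of the save_config / --json round trip exercised by skip_rpc_with_json.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &                      # target with its default RPC server
    tgt_pid=$!
    sleep 1                                                  # give the RPC socket a moment to appear
    "$SPDK/scripts/rpc.py" save_config > /tmp/config.json    # dump the live configuration as JSON
    kill "$tgt_pid"; wait "$tgt_pid"
    # Relaunch without an RPC server, loading the saved configuration directly:
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json /tmp/config.json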
13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:12.573 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.831 [2024-12-11 13:46:05.668172] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.831 00:05:12.831 real 0m0.090s 00:05:12.831 user 0m0.055s 00:05:12.831 sys 0m0.034s 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.831 13:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.831 ************************************ 00:05:12.831 END TEST skip_rpc_with_delay 00:05:12.831 ************************************ 00:05:12.831 13:46:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.831 13:46:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.831 13:46:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.831 13:46:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.831 13:46:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.831 13:46:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.831 ************************************ 00:05:12.831 START TEST exit_on_failed_rpc_init 00:05:12.831 ************************************ 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58429 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58429 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58429 ']' 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.831 13:46:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.831 [2024-12-11 13:46:05.815596] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:12.831 [2024-12-11 13:46:05.815722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58429 ] 00:05:13.090 [2024-12-11 13:46:05.966877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.090 [2024-12-11 13:46:06.029099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.090 [2024-12-11 13:46:06.110451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:13.348 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.348 [2024-12-11 13:46:06.392750] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:13.348 [2024-12-11 13:46:06.392848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:05:13.606 [2024-12-11 13:46:06.543342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.606 [2024-12-11 13:46:06.609781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.606 [2024-12-11 13:46:06.609900] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
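The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the expected failure in exit_on_failed_rpc_init: a second target is pointed at the socket the first one already owns. In rough outline (a sketch; the alternate socket path is illustrative):

    # Two targets cannot share the default RPC socket; a second instance needs -r.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                       # owns /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2                         # fails as traced above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock  # separate RPC socket works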
00:05:13.606 [2024-12-11 13:46:06.609924] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:13.606 [2024-12-11 13:46:06.609940] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58429 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58429 ']' 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58429 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58429 00:05:13.864 killing process with pid 58429 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58429' 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58429 00:05:13.864 13:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58429 00:05:14.122 ************************************ 00:05:14.122 END TEST exit_on_failed_rpc_init 00:05:14.122 ************************************ 00:05:14.122 00:05:14.122 real 0m1.361s 00:05:14.122 user 0m1.431s 00:05:14.122 sys 0m0.405s 00:05:14.122 13:46:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.122 13:46:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.122 13:46:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.122 00:05:14.122 real 0m14.332s 00:05:14.122 user 0m13.524s 00:05:14.122 sys 0m1.590s 00:05:14.122 ************************************ 00:05:14.122 END TEST skip_rpc 00:05:14.122 ************************************ 00:05:14.122 13:46:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.122 13:46:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.380 13:46:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.380 13:46:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.380 13:46:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.380 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:14.380 
************************************ 00:05:14.380 START TEST rpc_client 00:05:14.380 ************************************ 00:05:14.380 13:46:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.380 * Looking for test storage... 00:05:14.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.380 13:46:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.380 13:46:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.380 13:46:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.380 13:46:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.380 13:46:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.380 13:46:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.381 13:46:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.381 --rc genhtml_branch_coverage=1 00:05:14.381 --rc genhtml_function_coverage=1 00:05:14.381 --rc genhtml_legend=1 00:05:14.381 --rc geninfo_all_blocks=1 00:05:14.381 --rc geninfo_unexecuted_blocks=1 00:05:14.381 00:05:14.381 ' 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.381 --rc genhtml_branch_coverage=1 00:05:14.381 --rc genhtml_function_coverage=1 00:05:14.381 --rc genhtml_legend=1 00:05:14.381 --rc geninfo_all_blocks=1 00:05:14.381 --rc geninfo_unexecuted_blocks=1 00:05:14.381 00:05:14.381 ' 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.381 --rc genhtml_branch_coverage=1 00:05:14.381 --rc genhtml_function_coverage=1 00:05:14.381 --rc genhtml_legend=1 00:05:14.381 --rc geninfo_all_blocks=1 00:05:14.381 --rc geninfo_unexecuted_blocks=1 00:05:14.381 00:05:14.381 ' 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.381 --rc genhtml_branch_coverage=1 00:05:14.381 --rc genhtml_function_coverage=1 00:05:14.381 --rc genhtml_legend=1 00:05:14.381 --rc geninfo_all_blocks=1 00:05:14.381 --rc geninfo_unexecuted_blocks=1 00:05:14.381 00:05:14.381 ' 00:05:14.381 13:46:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.381 OK 00:05:14.381 13:46:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.381 00:05:14.381 real 0m0.213s 00:05:14.381 user 0m0.128s 00:05:14.381 sys 0m0.093s 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.381 13:46:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.381 ************************************ 00:05:14.381 END TEST rpc_client 00:05:14.381 ************************************ 00:05:14.639 13:46:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.639 13:46:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.639 13:46:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.639 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:14.639 ************************************ 00:05:14.639 START TEST json_config 00:05:14.639 ************************************ 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.639 13:46:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.639 13:46:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.639 13:46:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.639 13:46:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.639 13:46:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.639 13:46:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:14.639 13:46:07 json_config -- scripts/common.sh@345 -- # : 1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.639 13:46:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.639 13:46:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@353 -- # local d=1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.639 13:46:07 json_config -- scripts/common.sh@355 -- # echo 1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.639 13:46:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@353 -- # local d=2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.639 13:46:07 json_config -- scripts/common.sh@355 -- # echo 2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.639 13:46:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.639 13:46:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.639 13:46:07 json_config -- scripts/common.sh@368 -- # return 0 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.639 --rc genhtml_branch_coverage=1 00:05:14.639 --rc genhtml_function_coverage=1 00:05:14.639 --rc genhtml_legend=1 00:05:14.639 --rc geninfo_all_blocks=1 00:05:14.639 --rc geninfo_unexecuted_blocks=1 00:05:14.639 00:05:14.639 ' 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.639 --rc genhtml_branch_coverage=1 00:05:14.639 --rc genhtml_function_coverage=1 00:05:14.639 --rc genhtml_legend=1 00:05:14.639 --rc geninfo_all_blocks=1 00:05:14.639 --rc geninfo_unexecuted_blocks=1 00:05:14.639 00:05:14.639 ' 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.639 --rc genhtml_branch_coverage=1 00:05:14.639 --rc genhtml_function_coverage=1 00:05:14.639 --rc genhtml_legend=1 00:05:14.639 --rc geninfo_all_blocks=1 00:05:14.639 --rc geninfo_unexecuted_blocks=1 00:05:14.639 00:05:14.639 ' 00:05:14.639 13:46:07 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.639 --rc genhtml_branch_coverage=1 00:05:14.639 --rc genhtml_function_coverage=1 00:05:14.640 --rc genhtml_legend=1 00:05:14.640 --rc geninfo_all_blocks=1 00:05:14.640 --rc geninfo_unexecuted_blocks=1 00:05:14.640 00:05:14.640 ' 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.640 13:46:07 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.640 13:46:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.640 13:46:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.640 13:46:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.640 13:46:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.640 13:46:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.640 13:46:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.640 13:46:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.640 13:46:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.640 13:46:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@51 -- # : 0 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.640 13:46:07 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.640 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.640 13:46:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:14.640 INFO: JSON configuration test init 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.640 13:46:07 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.640 13:46:07 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.640 13:46:07 json_config -- json_config/common.sh@10 -- # shift 
00:05:14.640 Waiting for target to run... 00:05:14.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.640 13:46:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.640 13:46:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.640 13:46:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.640 13:46:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.640 13:46:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.640 13:46:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58579 00:05:14.640 13:46:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.640 13:46:07 json_config -- json_config/common.sh@25 -- # waitforlisten 58579 /var/tmp/spdk_tgt.sock 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 58579 ']' 00:05:14.640 13:46:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.640 13:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.898 [2024-12-11 13:46:07.734210] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:14.899 [2024-12-11 13:46:07.735105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58579 ] 00:05:15.157 [2024-12-11 13:46:08.173412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.416 [2024-12-11 13:46:08.222820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:15.982 00:05:15.982 13:46:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.982 13:46:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:15.982 13:46:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:15.983 13:46:08 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.241 [2024-12-11 13:46:09.082009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:16.241 13:46:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.241 13:46:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.241 13:46:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:16.241 13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@54 -- # sort 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:16.809 13:46:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.809 13:46:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:16.809 13:46:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.809 13:46:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.809 13:46:09 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:16.809 13:46:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.809 13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.067 MallocForNvmf0 00:05:17.067 13:46:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.067 13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.324 MallocForNvmf1 00:05:17.324 13:46:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.324 13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.582 [2024-12-11 13:46:10.442118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.583 13:46:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.583 13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.841 13:46:10 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.841 13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.100 13:46:10 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.100 13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.359 13:46:11 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.359 13:46:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.618 [2024-12-11 13:46:11.470753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.618 13:46:11 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:18.618 13:46:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.618 13:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.618 13:46:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:18.618 13:46:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.618 13:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.618 13:46:11 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:18.618 13:46:11 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.618 13:46:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.877 MallocBdevForConfigChangeCheck 00:05:18.877 13:46:11 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:18.877 13:46:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.877 13:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.877 13:46:11 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:18.877 13:46:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.443 INFO: shutting down applications... 00:05:19.443 13:46:12 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:19.443 13:46:12 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:19.443 13:46:12 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:19.443 13:46:12 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:19.443 13:46:12 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.701 Calling clear_iscsi_subsystem 00:05:19.701 Calling clear_nvmf_subsystem 00:05:19.701 Calling clear_nbd_subsystem 00:05:19.701 Calling clear_ublk_subsystem 00:05:19.701 Calling clear_vhost_blk_subsystem 00:05:19.701 Calling clear_vhost_scsi_subsystem 00:05:19.701 Calling clear_bdev_subsystem 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.701 13:46:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.268 13:46:13 json_config -- json_config/json_config.sh@352 -- # break 00:05:20.268 13:46:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:20.268 13:46:13 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:20.268 13:46:13 json_config -- json_config/common.sh@31 -- # local app=target 00:05:20.268 13:46:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.268 13:46:13 json_config -- json_config/common.sh@35 -- # [[ -n 58579 ]] 00:05:20.268 13:46:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58579 00:05:20.268 13:46:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.268 13:46:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.268 13:46:13 json_config -- json_config/common.sh@41 -- # kill -0 58579 00:05:20.268 13:46:13 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:20.835 13:46:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.835 13:46:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.835 13:46:13 json_config -- json_config/common.sh@41 -- # kill -0 58579 00:05:20.835 13:46:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.835 SPDK target shutdown done 00:05:20.835 13:46:13 json_config -- json_config/common.sh@43 -- # break 00:05:20.835 13:46:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.835 13:46:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.835 INFO: relaunching applications... 00:05:20.835 13:46:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:20.835 13:46:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.835 13:46:13 json_config -- json_config/common.sh@9 -- # local app=target 00:05:20.835 13:46:13 json_config -- json_config/common.sh@10 -- # shift 00:05:20.835 13:46:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.835 13:46:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.835 13:46:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.835 13:46:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.835 13:46:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.835 Waiting for target to run... 00:05:20.835 13:46:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58774 00:05:20.835 13:46:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.835 13:46:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.835 13:46:13 json_config -- json_config/common.sh@25 -- # waitforlisten 58774 /var/tmp/spdk_tgt.sock 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 58774 ']' 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.835 13:46:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.835 [2024-12-11 13:46:13.677140] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
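Before the shutdown and relaunch above, the json_config test built its NVMe-oF/TCP state with the tgt_rpc calls traced earlier (MallocForNvmf0/1, the TCP transport, and subsystem cnode1). Gathered in one place, that sequence is roughly the following sketch, using the same RPC socket path as the trace:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB malloc bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, options as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420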
00:05:20.835 [2024-12-11 13:46:13.677248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58774 ] 00:05:21.093 [2024-12-11 13:46:14.110127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.351 [2024-12-11 13:46:14.155049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.351 [2024-12-11 13:46:14.293573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.610 [2024-12-11 13:46:14.510379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.610 [2024-12-11 13:46:14.542479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.868 00:05:21.868 INFO: Checking if target configuration is the same... 00:05:21.868 13:46:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.868 13:46:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:21.868 13:46:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.868 13:46:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:21.868 13:46:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:21.868 13:46:14 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.868 13:46:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:21.868 13:46:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.868 + '[' 2 -ne 2 ']' 00:05:21.868 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:21.868 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:21.868 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:21.868 +++ basename /dev/fd/62 00:05:21.868 ++ mktemp /tmp/62.XXX 00:05:21.868 + tmp_file_1=/tmp/62.Zg3 00:05:21.868 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.868 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.868 + tmp_file_2=/tmp/spdk_tgt_config.json.C9M 00:05:21.868 + ret=0 00:05:21.868 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.127 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.385 + diff -u /tmp/62.Zg3 /tmp/spdk_tgt_config.json.C9M 00:05:22.385 INFO: JSON config files are the same 00:05:22.385 + echo 'INFO: JSON config files are the same' 00:05:22.385 + rm /tmp/62.Zg3 /tmp/spdk_tgt_config.json.C9M 00:05:22.385 + exit 0 00:05:22.385 INFO: changing configuration and checking if this can be detected... 00:05:22.385 13:46:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:22.385 13:46:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
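The "JSON config files are the same" verdict above comes from sorting both JSON documents with config_filter.py and diffing the results. Done by hand it looks roughly like this; the temporary file names are illustrative, and config_filter.py is assumed to filter stdin to stdout in the way json_diff.sh invokes it:

    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/running.json
    test/json_config/config_filter.py -method sort < /tmp/running.json    > /tmp/running.sorted
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.sorted
    diff -u /tmp/ondisk.sorted /tmp/running.sorted   # empty diff => configurations match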
00:05:22.385 13:46:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.385 13:46:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.644 13:46:15 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.644 13:46:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:22.644 13:46:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.644 + '[' 2 -ne 2 ']' 00:05:22.644 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.644 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:22.644 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.644 +++ basename /dev/fd/62 00:05:22.644 ++ mktemp /tmp/62.XXX 00:05:22.644 + tmp_file_1=/tmp/62.GsV 00:05:22.644 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.644 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.644 + tmp_file_2=/tmp/spdk_tgt_config.json.9KH 00:05:22.644 + ret=0 00:05:22.644 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.903 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.162 + diff -u /tmp/62.GsV /tmp/spdk_tgt_config.json.9KH 00:05:23.162 + ret=1 00:05:23.162 + echo '=== Start of file: /tmp/62.GsV ===' 00:05:23.162 + cat /tmp/62.GsV 00:05:23.162 + echo '=== End of file: /tmp/62.GsV ===' 00:05:23.162 + echo '' 00:05:23.162 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9KH ===' 00:05:23.162 + cat /tmp/spdk_tgt_config.json.9KH 00:05:23.162 + echo '=== End of file: /tmp/spdk_tgt_config.json.9KH ===' 00:05:23.162 + echo '' 00:05:23.162 + rm /tmp/62.GsV /tmp/spdk_tgt_config.json.9KH 00:05:23.162 + exit 1 00:05:23.162 INFO: configuration change detected. 00:05:23.162 13:46:15 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
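The change-detection half traced above is the inverse check: mutate the live configuration over RPC, rerun the same sorted diff, and require it to differ (ret=1). A rough equivalent of what json_config.sh does here, with paths as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Remove the throwaway bdev that exists only to provoke a config change.
    "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh \
         <("$rpc" -s /var/tmp/spdk_tgt.sock save_config) \
         /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json; then
      echo 'ERROR: configuration change was not detected' >&2
      exit 1
    fi
    echo 'INFO: configuration change detected.'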
00:05:23.162 13:46:15 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:23.162 13:46:15 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:23.162 13:46:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.162 13:46:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 58774 ]] 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.162 13:46:16 json_config -- json_config/json_config.sh@330 -- # killprocess 58774 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 58774 ']' 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@958 -- # kill -0 58774 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@959 -- # uname 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58774 00:05:23.162 killing process with pid 58774 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58774' 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@973 -- # kill 58774 00:05:23.162 13:46:16 json_config -- common/autotest_common.sh@978 -- # wait 58774 00:05:23.421 13:46:16 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.421 13:46:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:23.421 13:46:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.421 13:46:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.421 INFO: Success 00:05:23.421 13:46:16 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:23.421 13:46:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:23.421 ************************************ 00:05:23.421 END TEST json_config 00:05:23.421 
************************************ 00:05:23.421 00:05:23.421 real 0m8.907s 00:05:23.421 user 0m12.838s 00:05:23.421 sys 0m1.794s 00:05:23.421 13:46:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.421 13:46:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.421 13:46:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.421 13:46:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.421 13:46:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.421 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.421 ************************************ 00:05:23.421 START TEST json_config_extra_key 00:05:23.421 ************************************ 00:05:23.421 13:46:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.680 13:46:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.680 --rc genhtml_branch_coverage=1 00:05:23.680 --rc genhtml_function_coverage=1 00:05:23.680 --rc genhtml_legend=1 00:05:23.680 --rc geninfo_all_blocks=1 00:05:23.680 --rc geninfo_unexecuted_blocks=1 00:05:23.680 00:05:23.680 ' 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.680 --rc genhtml_branch_coverage=1 00:05:23.680 --rc genhtml_function_coverage=1 00:05:23.680 --rc genhtml_legend=1 00:05:23.680 --rc geninfo_all_blocks=1 00:05:23.680 --rc geninfo_unexecuted_blocks=1 00:05:23.680 00:05:23.680 ' 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.680 --rc genhtml_branch_coverage=1 00:05:23.680 --rc genhtml_function_coverage=1 00:05:23.680 --rc genhtml_legend=1 00:05:23.680 --rc geninfo_all_blocks=1 00:05:23.680 --rc geninfo_unexecuted_blocks=1 00:05:23.680 00:05:23.680 ' 00:05:23.680 13:46:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.680 --rc genhtml_branch_coverage=1 00:05:23.680 --rc genhtml_function_coverage=1 00:05:23.680 --rc genhtml_legend=1 00:05:23.680 --rc geninfo_all_blocks=1 00:05:23.680 --rc geninfo_unexecuted_blocks=1 00:05:23.680 00:05:23.680 ' 00:05:23.680 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.680 13:46:16 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.680 13:46:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.681 13:46:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.681 13:46:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.681 13:46:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.681 13:46:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.681 13:46:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.681 13:46:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.681 13:46:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.681 13:46:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.681 13:46:16 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.681 13:46:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.681 INFO: launching applications... 00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
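json_config_extra_key repeats the start/stop cycle, but boots spdk_tgt from a standalone JSON file rather than a previously saved config. The launch traced below is equivalent to the following sketch; the config contents shown are only an illustration of the subsystems/config/method/params shape spdk_tgt accepts, not the actual extra_key.json shipped in the repo:

    # Hypothetical stand-in for test/json_config/extra_key.json
    printf '%s\n' \
      '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
      '  { "method": "bdev_malloc_create",' \
      '    "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 } }' \
      '] } ] }' > /tmp/extra_key.example.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.example.json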
00:05:23.681 13:46:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58928 00:05:23.681 Waiting for target to run... 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58928 /var/tmp/spdk_tgt.sock 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58928 ']' 00:05:23.681 13:46:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.681 13:46:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.940 [2024-12-11 13:46:16.730277] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:23.940 [2024-12-11 13:46:16.730393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:05:24.199 [2024-12-11 13:46:17.185403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.199 [2024-12-11 13:46:17.233945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.458 [2024-12-11 13:46:17.268749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.717 13:46:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.717 00:05:24.717 13:46:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.717 INFO: shutting down applications... 00:05:24.717 13:46:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
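The shutdown that follows is the generic json_config/common.sh pattern visible in the trace: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target gone. As a sketch:

    pid=58928                       # pid recorded when the target was launched
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
      fi
      sleep 0.5
    done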
00:05:24.717 13:46:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58928 ]] 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58928 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58928 00:05:24.717 13:46:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58928 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:25.283 SPDK target shutdown done 00:05:25.283 13:46:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:25.283 Success 00:05:25.283 13:46:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:25.283 00:05:25.283 real 0m1.827s 00:05:25.283 user 0m1.729s 00:05:25.283 sys 0m0.448s 00:05:25.283 13:46:18 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.283 13:46:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.283 ************************************ 00:05:25.283 END TEST json_config_extra_key 00:05:25.283 ************************************ 00:05:25.283 13:46:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.283 13:46:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.283 13:46:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.283 13:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.283 ************************************ 00:05:25.283 START TEST alias_rpc 00:05:25.283 ************************************ 00:05:25.283 13:46:18 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.542 * Looking for test storage... 
00:05:25.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.542 13:46:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.542 --rc genhtml_branch_coverage=1 00:05:25.542 --rc genhtml_function_coverage=1 00:05:25.542 --rc genhtml_legend=1 00:05:25.542 --rc geninfo_all_blocks=1 00:05:25.542 --rc geninfo_unexecuted_blocks=1 00:05:25.542 00:05:25.542 ' 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.542 --rc genhtml_branch_coverage=1 00:05:25.542 --rc genhtml_function_coverage=1 00:05:25.542 --rc genhtml_legend=1 00:05:25.542 --rc geninfo_all_blocks=1 00:05:25.542 --rc geninfo_unexecuted_blocks=1 00:05:25.542 00:05:25.542 ' 00:05:25.542 13:46:18 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.542 --rc genhtml_branch_coverage=1 00:05:25.542 --rc genhtml_function_coverage=1 00:05:25.542 --rc genhtml_legend=1 00:05:25.542 --rc geninfo_all_blocks=1 00:05:25.542 --rc geninfo_unexecuted_blocks=1 00:05:25.542 00:05:25.542 ' 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.542 --rc genhtml_branch_coverage=1 00:05:25.542 --rc genhtml_function_coverage=1 00:05:25.542 --rc genhtml_legend=1 00:05:25.542 --rc geninfo_all_blocks=1 00:05:25.542 --rc geninfo_unexecuted_blocks=1 00:05:25.542 00:05:25.542 ' 00:05:25.542 13:46:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.542 13:46:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59001 00:05:25.542 13:46:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.542 13:46:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59001 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59001 ']' 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.542 13:46:18 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.543 13:46:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.543 [2024-12-11 13:46:18.550360] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:25.543 [2024-12-11 13:46:18.550474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:05:25.801 [2024-12-11 13:46:18.701476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.801 [2024-12-11 13:46:18.767731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.060 [2024-12-11 13:46:18.848847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.060 13:46:19 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.060 13:46:19 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.060 13:46:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:26.633 13:46:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59001 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59001 ']' 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59001 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59001 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.633 killing process with pid 59001 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59001' 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@973 -- # kill 59001 00:05:26.633 13:46:19 alias_rpc -- common/autotest_common.sh@978 -- # wait 59001 00:05:26.904 ************************************ 00:05:26.904 END TEST alias_rpc 00:05:26.904 00:05:26.904 real 0m1.500s 00:05:26.904 user 0m1.597s 00:05:26.904 sys 0m0.440s 00:05:26.904 13:46:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.904 13:46:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.904 ************************************ 00:05:26.904 13:46:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.904 13:46:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.904 13:46:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.904 13:46:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.904 13:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:26.904 ************************************ 00:05:26.904 START TEST spdkcli_tcp 00:05:26.904 ************************************ 00:05:26.904 13:46:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.904 * Looking for test storage... 
00:05:26.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:26.904 13:46:19 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.904 13:46:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.904 13:46:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.163 13:46:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.163 --rc genhtml_branch_coverage=1 00:05:27.163 --rc genhtml_function_coverage=1 00:05:27.163 --rc genhtml_legend=1 00:05:27.163 --rc geninfo_all_blocks=1 00:05:27.163 --rc geninfo_unexecuted_blocks=1 00:05:27.163 00:05:27.163 ' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.163 --rc genhtml_branch_coverage=1 00:05:27.163 --rc genhtml_function_coverage=1 00:05:27.163 --rc genhtml_legend=1 00:05:27.163 --rc geninfo_all_blocks=1 00:05:27.163 --rc geninfo_unexecuted_blocks=1 00:05:27.163 
00:05:27.163 ' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.163 --rc genhtml_branch_coverage=1 00:05:27.163 --rc genhtml_function_coverage=1 00:05:27.163 --rc genhtml_legend=1 00:05:27.163 --rc geninfo_all_blocks=1 00:05:27.163 --rc geninfo_unexecuted_blocks=1 00:05:27.163 00:05:27.163 ' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.163 --rc genhtml_branch_coverage=1 00:05:27.163 --rc genhtml_function_coverage=1 00:05:27.163 --rc genhtml_legend=1 00:05:27.163 --rc geninfo_all_blocks=1 00:05:27.163 --rc geninfo_unexecuted_blocks=1 00:05:27.163 00:05:27.163 ' 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59083 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59083 00:05:27.163 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59083 ']' 00:05:27.163 13:46:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.164 13:46:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.164 13:46:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.164 13:46:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.164 13:46:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.164 [2024-12-11 13:46:20.106349] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:27.164 [2024-12-11 13:46:20.106468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59083 ] 00:05:27.422 [2024-12-11 13:46:20.254081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.422 [2024-12-11 13:46:20.302668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.422 [2024-12-11 13:46:20.302681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.422 [2024-12-11 13:46:20.373591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.680 13:46:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.680 13:46:20 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:27.680 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59087 00:05:27.680 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.680 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:27.939 [ 00:05:27.939 "bdev_malloc_delete", 00:05:27.939 "bdev_malloc_create", 00:05:27.939 "bdev_null_resize", 00:05:27.939 "bdev_null_delete", 00:05:27.939 "bdev_null_create", 00:05:27.939 "bdev_nvme_cuse_unregister", 00:05:27.939 "bdev_nvme_cuse_register", 00:05:27.939 "bdev_opal_new_user", 00:05:27.939 "bdev_opal_set_lock_state", 00:05:27.939 "bdev_opal_delete", 00:05:27.939 "bdev_opal_get_info", 00:05:27.939 "bdev_opal_create", 00:05:27.939 "bdev_nvme_opal_revert", 00:05:27.939 "bdev_nvme_opal_init", 00:05:27.939 "bdev_nvme_send_cmd", 00:05:27.939 "bdev_nvme_set_keys", 00:05:27.939 "bdev_nvme_get_path_iostat", 00:05:27.939 "bdev_nvme_get_mdns_discovery_info", 00:05:27.939 "bdev_nvme_stop_mdns_discovery", 00:05:27.939 "bdev_nvme_start_mdns_discovery", 00:05:27.939 "bdev_nvme_set_multipath_policy", 00:05:27.939 "bdev_nvme_set_preferred_path", 00:05:27.939 "bdev_nvme_get_io_paths", 00:05:27.939 "bdev_nvme_remove_error_injection", 00:05:27.939 "bdev_nvme_add_error_injection", 00:05:27.939 "bdev_nvme_get_discovery_info", 00:05:27.939 "bdev_nvme_stop_discovery", 00:05:27.939 "bdev_nvme_start_discovery", 00:05:27.939 "bdev_nvme_get_controller_health_info", 00:05:27.939 "bdev_nvme_disable_controller", 00:05:27.939 "bdev_nvme_enable_controller", 00:05:27.939 "bdev_nvme_reset_controller", 00:05:27.939 "bdev_nvme_get_transport_statistics", 00:05:27.939 "bdev_nvme_apply_firmware", 00:05:27.939 "bdev_nvme_detach_controller", 00:05:27.939 "bdev_nvme_get_controllers", 00:05:27.939 "bdev_nvme_attach_controller", 00:05:27.939 "bdev_nvme_set_hotplug", 00:05:27.939 "bdev_nvme_set_options", 00:05:27.939 "bdev_passthru_delete", 00:05:27.939 "bdev_passthru_create", 00:05:27.939 "bdev_lvol_set_parent_bdev", 00:05:27.939 "bdev_lvol_set_parent", 00:05:27.939 "bdev_lvol_check_shallow_copy", 00:05:27.939 "bdev_lvol_start_shallow_copy", 00:05:27.939 "bdev_lvol_grow_lvstore", 00:05:27.939 "bdev_lvol_get_lvols", 00:05:27.939 "bdev_lvol_get_lvstores", 00:05:27.939 "bdev_lvol_delete", 00:05:27.939 "bdev_lvol_set_read_only", 00:05:27.939 "bdev_lvol_resize", 00:05:27.939 "bdev_lvol_decouple_parent", 00:05:27.939 "bdev_lvol_inflate", 00:05:27.939 "bdev_lvol_rename", 00:05:27.939 "bdev_lvol_clone_bdev", 00:05:27.939 "bdev_lvol_clone", 00:05:27.939 "bdev_lvol_snapshot", 
00:05:27.939 "bdev_lvol_create", 00:05:27.939 "bdev_lvol_delete_lvstore", 00:05:27.939 "bdev_lvol_rename_lvstore", 00:05:27.939 "bdev_lvol_create_lvstore", 00:05:27.939 "bdev_raid_set_options", 00:05:27.939 "bdev_raid_remove_base_bdev", 00:05:27.939 "bdev_raid_add_base_bdev", 00:05:27.939 "bdev_raid_delete", 00:05:27.939 "bdev_raid_create", 00:05:27.939 "bdev_raid_get_bdevs", 00:05:27.939 "bdev_error_inject_error", 00:05:27.939 "bdev_error_delete", 00:05:27.939 "bdev_error_create", 00:05:27.939 "bdev_split_delete", 00:05:27.939 "bdev_split_create", 00:05:27.939 "bdev_delay_delete", 00:05:27.939 "bdev_delay_create", 00:05:27.939 "bdev_delay_update_latency", 00:05:27.939 "bdev_zone_block_delete", 00:05:27.939 "bdev_zone_block_create", 00:05:27.939 "blobfs_create", 00:05:27.939 "blobfs_detect", 00:05:27.939 "blobfs_set_cache_size", 00:05:27.939 "bdev_aio_delete", 00:05:27.939 "bdev_aio_rescan", 00:05:27.939 "bdev_aio_create", 00:05:27.939 "bdev_ftl_set_property", 00:05:27.939 "bdev_ftl_get_properties", 00:05:27.939 "bdev_ftl_get_stats", 00:05:27.939 "bdev_ftl_unmap", 00:05:27.939 "bdev_ftl_unload", 00:05:27.939 "bdev_ftl_delete", 00:05:27.939 "bdev_ftl_load", 00:05:27.939 "bdev_ftl_create", 00:05:27.939 "bdev_virtio_attach_controller", 00:05:27.939 "bdev_virtio_scsi_get_devices", 00:05:27.939 "bdev_virtio_detach_controller", 00:05:27.939 "bdev_virtio_blk_set_hotplug", 00:05:27.939 "bdev_iscsi_delete", 00:05:27.939 "bdev_iscsi_create", 00:05:27.939 "bdev_iscsi_set_options", 00:05:27.939 "bdev_uring_delete", 00:05:27.939 "bdev_uring_rescan", 00:05:27.939 "bdev_uring_create", 00:05:27.939 "accel_error_inject_error", 00:05:27.939 "ioat_scan_accel_module", 00:05:27.939 "dsa_scan_accel_module", 00:05:27.940 "iaa_scan_accel_module", 00:05:27.940 "keyring_file_remove_key", 00:05:27.940 "keyring_file_add_key", 00:05:27.940 "keyring_linux_set_options", 00:05:27.940 "fsdev_aio_delete", 00:05:27.940 "fsdev_aio_create", 00:05:27.940 "iscsi_get_histogram", 00:05:27.940 "iscsi_enable_histogram", 00:05:27.940 "iscsi_set_options", 00:05:27.940 "iscsi_get_auth_groups", 00:05:27.940 "iscsi_auth_group_remove_secret", 00:05:27.940 "iscsi_auth_group_add_secret", 00:05:27.940 "iscsi_delete_auth_group", 00:05:27.940 "iscsi_create_auth_group", 00:05:27.940 "iscsi_set_discovery_auth", 00:05:27.940 "iscsi_get_options", 00:05:27.940 "iscsi_target_node_request_logout", 00:05:27.940 "iscsi_target_node_set_redirect", 00:05:27.940 "iscsi_target_node_set_auth", 00:05:27.940 "iscsi_target_node_add_lun", 00:05:27.940 "iscsi_get_stats", 00:05:27.940 "iscsi_get_connections", 00:05:27.940 "iscsi_portal_group_set_auth", 00:05:27.940 "iscsi_start_portal_group", 00:05:27.940 "iscsi_delete_portal_group", 00:05:27.940 "iscsi_create_portal_group", 00:05:27.940 "iscsi_get_portal_groups", 00:05:27.940 "iscsi_delete_target_node", 00:05:27.940 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.940 "iscsi_target_node_add_pg_ig_maps", 00:05:27.940 "iscsi_create_target_node", 00:05:27.940 "iscsi_get_target_nodes", 00:05:27.940 "iscsi_delete_initiator_group", 00:05:27.940 "iscsi_initiator_group_remove_initiators", 00:05:27.940 "iscsi_initiator_group_add_initiators", 00:05:27.940 "iscsi_create_initiator_group", 00:05:27.940 "iscsi_get_initiator_groups", 00:05:27.940 "nvmf_set_crdt", 00:05:27.940 "nvmf_set_config", 00:05:27.940 "nvmf_set_max_subsystems", 00:05:27.940 "nvmf_stop_mdns_prr", 00:05:27.940 "nvmf_publish_mdns_prr", 00:05:27.940 "nvmf_subsystem_get_listeners", 00:05:27.940 "nvmf_subsystem_get_qpairs", 00:05:27.940 
"nvmf_subsystem_get_controllers", 00:05:27.940 "nvmf_get_stats", 00:05:27.940 "nvmf_get_transports", 00:05:27.940 "nvmf_create_transport", 00:05:27.940 "nvmf_get_targets", 00:05:27.940 "nvmf_delete_target", 00:05:27.940 "nvmf_create_target", 00:05:27.940 "nvmf_subsystem_allow_any_host", 00:05:27.940 "nvmf_subsystem_set_keys", 00:05:27.940 "nvmf_subsystem_remove_host", 00:05:27.940 "nvmf_subsystem_add_host", 00:05:27.940 "nvmf_ns_remove_host", 00:05:27.940 "nvmf_ns_add_host", 00:05:27.940 "nvmf_subsystem_remove_ns", 00:05:27.940 "nvmf_subsystem_set_ns_ana_group", 00:05:27.940 "nvmf_subsystem_add_ns", 00:05:27.940 "nvmf_subsystem_listener_set_ana_state", 00:05:27.940 "nvmf_discovery_get_referrals", 00:05:27.940 "nvmf_discovery_remove_referral", 00:05:27.940 "nvmf_discovery_add_referral", 00:05:27.940 "nvmf_subsystem_remove_listener", 00:05:27.940 "nvmf_subsystem_add_listener", 00:05:27.940 "nvmf_delete_subsystem", 00:05:27.940 "nvmf_create_subsystem", 00:05:27.940 "nvmf_get_subsystems", 00:05:27.940 "env_dpdk_get_mem_stats", 00:05:27.940 "nbd_get_disks", 00:05:27.940 "nbd_stop_disk", 00:05:27.940 "nbd_start_disk", 00:05:27.940 "ublk_recover_disk", 00:05:27.940 "ublk_get_disks", 00:05:27.940 "ublk_stop_disk", 00:05:27.940 "ublk_start_disk", 00:05:27.940 "ublk_destroy_target", 00:05:27.940 "ublk_create_target", 00:05:27.940 "virtio_blk_create_transport", 00:05:27.940 "virtio_blk_get_transports", 00:05:27.940 "vhost_controller_set_coalescing", 00:05:27.940 "vhost_get_controllers", 00:05:27.940 "vhost_delete_controller", 00:05:27.940 "vhost_create_blk_controller", 00:05:27.940 "vhost_scsi_controller_remove_target", 00:05:27.940 "vhost_scsi_controller_add_target", 00:05:27.940 "vhost_start_scsi_controller", 00:05:27.940 "vhost_create_scsi_controller", 00:05:27.940 "thread_set_cpumask", 00:05:27.940 "scheduler_set_options", 00:05:27.940 "framework_get_governor", 00:05:27.940 "framework_get_scheduler", 00:05:27.940 "framework_set_scheduler", 00:05:27.940 "framework_get_reactors", 00:05:27.940 "thread_get_io_channels", 00:05:27.940 "thread_get_pollers", 00:05:27.940 "thread_get_stats", 00:05:27.940 "framework_monitor_context_switch", 00:05:27.940 "spdk_kill_instance", 00:05:27.940 "log_enable_timestamps", 00:05:27.940 "log_get_flags", 00:05:27.940 "log_clear_flag", 00:05:27.940 "log_set_flag", 00:05:27.940 "log_get_level", 00:05:27.940 "log_set_level", 00:05:27.940 "log_get_print_level", 00:05:27.940 "log_set_print_level", 00:05:27.940 "framework_enable_cpumask_locks", 00:05:27.940 "framework_disable_cpumask_locks", 00:05:27.940 "framework_wait_init", 00:05:27.940 "framework_start_init", 00:05:27.940 "scsi_get_devices", 00:05:27.940 "bdev_get_histogram", 00:05:27.940 "bdev_enable_histogram", 00:05:27.940 "bdev_set_qos_limit", 00:05:27.940 "bdev_set_qd_sampling_period", 00:05:27.940 "bdev_get_bdevs", 00:05:27.940 "bdev_reset_iostat", 00:05:27.940 "bdev_get_iostat", 00:05:27.940 "bdev_examine", 00:05:27.940 "bdev_wait_for_examine", 00:05:27.940 "bdev_set_options", 00:05:27.940 "accel_get_stats", 00:05:27.940 "accel_set_options", 00:05:27.940 "accel_set_driver", 00:05:27.940 "accel_crypto_key_destroy", 00:05:27.940 "accel_crypto_keys_get", 00:05:27.940 "accel_crypto_key_create", 00:05:27.940 "accel_assign_opc", 00:05:27.940 "accel_get_module_info", 00:05:27.940 "accel_get_opc_assignments", 00:05:27.940 "vmd_rescan", 00:05:27.940 "vmd_remove_device", 00:05:27.940 "vmd_enable", 00:05:27.940 "sock_get_default_impl", 00:05:27.940 "sock_set_default_impl", 00:05:27.940 "sock_impl_set_options", 00:05:27.940 
"sock_impl_get_options", 00:05:27.940 "iobuf_get_stats", 00:05:27.940 "iobuf_set_options", 00:05:27.940 "keyring_get_keys", 00:05:27.940 "framework_get_pci_devices", 00:05:27.940 "framework_get_config", 00:05:27.940 "framework_get_subsystems", 00:05:27.940 "fsdev_set_opts", 00:05:27.940 "fsdev_get_opts", 00:05:27.940 "trace_get_info", 00:05:27.940 "trace_get_tpoint_group_mask", 00:05:27.940 "trace_disable_tpoint_group", 00:05:27.940 "trace_enable_tpoint_group", 00:05:27.940 "trace_clear_tpoint_mask", 00:05:27.940 "trace_set_tpoint_mask", 00:05:27.940 "notify_get_notifications", 00:05:27.940 "notify_get_types", 00:05:27.940 "spdk_get_version", 00:05:27.940 "rpc_get_methods" 00:05:27.940 ] 00:05:27.940 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.940 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.940 13:46:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59083 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59083 ']' 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59083 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59083 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.940 killing process with pid 59083 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59083' 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59083 00:05:27.940 13:46:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59083 00:05:28.508 00:05:28.508 real 0m1.470s 00:05:28.508 user 0m2.508s 00:05:28.508 sys 0m0.461s 00:05:28.508 13:46:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.508 13:46:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.508 ************************************ 00:05:28.508 END TEST spdkcli_tcp 00:05:28.508 ************************************ 00:05:28.508 13:46:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.508 13:46:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.508 13:46:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.508 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.508 ************************************ 00:05:28.508 START TEST dpdk_mem_utility 00:05:28.508 ************************************ 00:05:28.508 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.508 * Looking for test storage... 
00:05:28.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:28.508 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.508 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.508 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.508 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.508 13:46:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.508 13:46:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.508 13:46:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.508 13:46:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.508 13:46:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.766 13:46:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.766 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.766 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.767 --rc genhtml_branch_coverage=1 00:05:28.767 --rc genhtml_function_coverage=1 00:05:28.767 --rc genhtml_legend=1 00:05:28.767 --rc geninfo_all_blocks=1 00:05:28.767 --rc geninfo_unexecuted_blocks=1 00:05:28.767 00:05:28.767 ' 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.767 --rc 
genhtml_branch_coverage=1 00:05:28.767 --rc genhtml_function_coverage=1 00:05:28.767 --rc genhtml_legend=1 00:05:28.767 --rc geninfo_all_blocks=1 00:05:28.767 --rc geninfo_unexecuted_blocks=1 00:05:28.767 00:05:28.767 ' 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.767 --rc genhtml_branch_coverage=1 00:05:28.767 --rc genhtml_function_coverage=1 00:05:28.767 --rc genhtml_legend=1 00:05:28.767 --rc geninfo_all_blocks=1 00:05:28.767 --rc geninfo_unexecuted_blocks=1 00:05:28.767 00:05:28.767 ' 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.767 --rc genhtml_branch_coverage=1 00:05:28.767 --rc genhtml_function_coverage=1 00:05:28.767 --rc genhtml_legend=1 00:05:28.767 --rc geninfo_all_blocks=1 00:05:28.767 --rc geninfo_unexecuted_blocks=1 00:05:28.767 00:05:28.767 ' 00:05:28.767 13:46:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.767 13:46:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59169 00:05:28.767 13:46:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.767 13:46:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59169 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59169 ']' 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.767 13:46:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.767 [2024-12-11 13:46:21.631607] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:28.767 [2024-12-11 13:46:21.631723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:05:28.767 [2024-12-11 13:46:21.781028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.026 [2024-12-11 13:46:21.834827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.026 [2024-12-11 13:46:21.908037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.286 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.286 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:29.286 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.286 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.286 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.286 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.286 { 00:05:29.286 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.286 } 00:05:29.286 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.286 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.286 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:29.286 1 heaps totaling size 818.000000 MiB 00:05:29.286 size: 818.000000 MiB heap id: 0 00:05:29.286 end heaps---------- 00:05:29.286 9 mempools totaling size 603.782043 MiB 00:05:29.286 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.286 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.286 size: 100.555481 MiB name: bdev_io_59169 00:05:29.286 size: 50.003479 MiB name: msgpool_59169 00:05:29.286 size: 36.509338 MiB name: fsdev_io_59169 00:05:29.286 size: 21.763794 MiB name: PDU_Pool 00:05:29.286 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.286 size: 4.133484 MiB name: evtpool_59169 00:05:29.286 size: 0.026123 MiB name: Session_Pool 00:05:29.286 end mempools------- 00:05:29.286 6 memzones totaling size 4.142822 MiB 00:05:29.286 size: 1.000366 MiB name: RG_ring_0_59169 00:05:29.286 size: 1.000366 MiB name: RG_ring_1_59169 00:05:29.286 size: 1.000366 MiB name: RG_ring_4_59169 00:05:29.286 size: 1.000366 MiB name: RG_ring_5_59169 00:05:29.286 size: 0.125366 MiB name: RG_ring_2_59169 00:05:29.286 size: 0.015991 MiB name: RG_ring_3_59169 00:05:29.286 end memzones------- 00:05:29.286 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.286 heap id: 0 total size: 818.000000 MiB number of busy elements: 314 number of free elements: 15 00:05:29.286 list of free elements. 
size: 10.803040 MiB 00:05:29.286 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:29.286 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:29.286 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:29.286 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:29.286 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:29.286 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:29.286 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:29.286 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:29.286 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:05:29.286 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:29.286 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:29.286 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:29.286 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:29.286 element at address: 0x200028200000 with size: 0.396301 MiB 00:05:29.286 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:29.286 list of standard malloc elements. size: 199.268066 MiB 00:05:29.286 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:29.286 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:29.286 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:29.286 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:29.286 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:29.286 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:29.286 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:29.286 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:29.286 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:29.286 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:29.286 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:29.287 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:29.287 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:29.287 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:05:29.288 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:29.288 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:29.288 element at address: 0x200028265740 with size: 0.000183 MiB 00:05:29.288 element at address: 0x200028265800 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c400 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e580 
with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:29.288 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:29.289 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:29.289 list of memzone associated elements. 
size: 607.928894 MiB 00:05:29.289 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:29.289 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.289 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:29.289 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.289 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:29.289 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59169_0 00:05:29.289 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:29.289 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59169_0 00:05:29.289 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:29.289 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59169_0 00:05:29.289 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:29.289 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.289 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:29.289 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.289 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:29.289 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59169_0 00:05:29.289 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:29.289 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59169 00:05:29.289 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:29.289 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59169 00:05:29.289 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:29.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.289 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:29.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.289 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:29.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.289 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:29.289 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.289 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:29.289 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59169 00:05:29.289 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:29.289 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59169 00:05:29.289 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:29.289 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59169 00:05:29.289 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:29.289 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59169 00:05:29.289 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:29.289 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59169 00:05:29.289 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:29.289 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59169 00:05:29.289 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:29.289 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.289 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:29.289 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.289 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:29.289 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.289 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:29.289 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59169 00:05:29.289 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:29.289 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59169 00:05:29.289 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:29.289 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.289 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:05:29.289 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.289 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:29.289 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59169 00:05:29.289 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:05:29.289 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.289 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:29.289 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59169 00:05:29.289 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:29.289 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59169 00:05:29.289 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:29.289 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59169 00:05:29.289 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:05:29.289 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.289 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.289 13:46:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59169 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59169 ']' 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59169 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59169 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.289 killing process with pid 59169 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59169' 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59169 00:05:29.289 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59169 00:05:29.856 00:05:29.856 real 0m1.323s 00:05:29.856 user 0m1.296s 00:05:29.856 sys 0m0.414s 00:05:29.856 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.856 13:46:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.856 ************************************ 00:05:29.856 END TEST dpdk_mem_utility 00:05:29.856 ************************************ 00:05:29.856 13:46:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.856 13:46:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.856 13:46:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.857 13:46:22 -- common/autotest_common.sh@10 -- # set +x 
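The dpdk_mem_utility sequence that just finished reduces to a handful of commands; a minimal sketch, assuming the repo layout visible in the trace and substituting scripts/rpc.py for the harness's rpc_cmd wrapper (an assumption, since the wrapper's expansion is not shown):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt &                    # target comes up and listens on /var/tmp/spdk.sock
    tgt_pid=$!
    # wait for /var/tmp/spdk.sock before issuing RPCs (the trace uses waitforlisten for this)
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt, per the RPC reply above
    $SPDK/scripts/dpdk_mem_info.py                # heap / mempool / memzone summary, as printed above
    $SPDK/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0, as in the long listing above
    kill "$tgt_pid"

This mirrors the test_dpdk_mem_info.sh steps shown in the trace; only the direct rpc.py invocation is an assumption.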
00:05:29.857 ************************************ 00:05:29.857 START TEST event 00:05:29.857 ************************************ 00:05:29.857 13:46:22 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.857 * Looking for test storage... 00:05:29.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.857 13:46:22 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.857 13:46:22 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.857 13:46:22 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.116 13:46:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.116 13:46:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.116 13:46:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.116 13:46:22 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.116 13:46:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.116 13:46:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.116 13:46:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.116 13:46:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.116 13:46:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.116 13:46:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.116 13:46:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.116 13:46:22 event -- scripts/common.sh@344 -- # case "$op" in 00:05:30.116 13:46:22 event -- scripts/common.sh@345 -- # : 1 00:05:30.116 13:46:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.116 13:46:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.116 13:46:22 event -- scripts/common.sh@365 -- # decimal 1 00:05:30.116 13:46:22 event -- scripts/common.sh@353 -- # local d=1 00:05:30.116 13:46:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.116 13:46:22 event -- scripts/common.sh@355 -- # echo 1 00:05:30.116 13:46:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.116 13:46:22 event -- scripts/common.sh@366 -- # decimal 2 00:05:30.116 13:46:22 event -- scripts/common.sh@353 -- # local d=2 00:05:30.116 13:46:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.116 13:46:22 event -- scripts/common.sh@355 -- # echo 2 00:05:30.116 13:46:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.116 13:46:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.116 13:46:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.116 13:46:22 event -- scripts/common.sh@368 -- # return 0 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.116 --rc genhtml_branch_coverage=1 00:05:30.116 --rc genhtml_function_coverage=1 00:05:30.116 --rc genhtml_legend=1 00:05:30.116 --rc geninfo_all_blocks=1 00:05:30.116 --rc geninfo_unexecuted_blocks=1 00:05:30.116 00:05:30.116 ' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.116 --rc genhtml_branch_coverage=1 00:05:30.116 --rc genhtml_function_coverage=1 00:05:30.116 --rc genhtml_legend=1 00:05:30.116 --rc 
geninfo_all_blocks=1 00:05:30.116 --rc geninfo_unexecuted_blocks=1 00:05:30.116 00:05:30.116 ' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.116 --rc genhtml_branch_coverage=1 00:05:30.116 --rc genhtml_function_coverage=1 00:05:30.116 --rc genhtml_legend=1 00:05:30.116 --rc geninfo_all_blocks=1 00:05:30.116 --rc geninfo_unexecuted_blocks=1 00:05:30.116 00:05:30.116 ' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.116 --rc genhtml_branch_coverage=1 00:05:30.116 --rc genhtml_function_coverage=1 00:05:30.116 --rc genhtml_legend=1 00:05:30.116 --rc geninfo_all_blocks=1 00:05:30.116 --rc geninfo_unexecuted_blocks=1 00:05:30.116 00:05:30.116 ' 00:05:30.116 13:46:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:30.116 13:46:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.116 13:46:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:30.116 13:46:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.116 13:46:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.116 ************************************ 00:05:30.116 START TEST event_perf 00:05:30.116 ************************************ 00:05:30.116 13:46:22 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.116 Running I/O for 1 seconds...[2024-12-11 13:46:22.948404] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:30.116 [2024-12-11 13:46:22.948594] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:05:30.116 [2024-12-11 13:46:23.092034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.116 [2024-12-11 13:46:23.147567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.116 [2024-12-11 13:46:23.147733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.116 [2024-12-11 13:46:23.147851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.116 Running I/O for 1 seconds...[2024-12-11 13:46:23.148025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.492 00:05:31.492 lcore 0: 193000 00:05:31.492 lcore 1: 192999 00:05:31.492 lcore 2: 192996 00:05:31.492 lcore 3: 192997 00:05:31.492 done. 
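In the event_perf invocation above, -m 0xF is the hexadecimal core mask and -t 1 the run time in seconds: 0xF is binary 1111, so lcores 0-3 are selected, four reactors start, and four per-lcore event counts are reported after the one-second run. A small sketch of that mask arithmetic in shell, with the mask value taken from the log and the rest purely illustrative:

    mask=0xF
    for core in 0 1 2 3; do
      if (( (mask >> core) & 1 )); then
        echo "lcore $core selected"   # prints for all four cores when mask=0xF
      fi
    done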
00:05:31.492 ************************************ 00:05:31.492 END TEST event_perf 00:05:31.492 ************************************ 00:05:31.492 00:05:31.492 real 0m1.274s 00:05:31.493 user 0m4.102s 00:05:31.493 sys 0m0.052s 00:05:31.493 13:46:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.493 13:46:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.493 13:46:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.493 13:46:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.493 13:46:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.493 13:46:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.493 ************************************ 00:05:31.493 START TEST event_reactor 00:05:31.493 ************************************ 00:05:31.493 13:46:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.493 [2024-12-11 13:46:24.270690] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:31.493 [2024-12-11 13:46:24.270949] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59285 ] 00:05:31.493 [2024-12-11 13:46:24.416671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.493 [2024-12-11 13:46:24.465023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.869 test_start 00:05:32.869 oneshot 00:05:32.869 tick 100 00:05:32.869 tick 100 00:05:32.869 tick 250 00:05:32.869 tick 100 00:05:32.869 tick 100 00:05:32.869 tick 100 00:05:32.869 tick 250 00:05:32.869 tick 500 00:05:32.869 tick 100 00:05:32.869 tick 100 00:05:32.869 tick 250 00:05:32.869 tick 100 00:05:32.869 tick 100 00:05:32.869 test_end 00:05:32.869 00:05:32.869 real 0m1.256s 00:05:32.869 user 0m1.111s 00:05:32.869 sys 0m0.039s 00:05:32.869 ************************************ 00:05:32.869 END TEST event_reactor 00:05:32.869 ************************************ 00:05:32.869 13:46:25 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.869 13:46:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.869 13:46:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.869 13:46:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.869 13:46:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.869 13:46:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.869 ************************************ 00:05:32.869 START TEST event_reactor_perf 00:05:32.869 ************************************ 00:05:32.869 13:46:25 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.869 [2024-12-11 13:46:25.577893] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:32.869 [2024-12-11 13:46:25.577990] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:05:32.869 [2024-12-11 13:46:25.725111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.869 [2024-12-11 13:46:25.769768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.844 test_start 00:05:33.844 test_end 00:05:33.844 Performance: 394798 events per second 00:05:33.844 00:05:33.844 real 0m1.255s 00:05:33.844 user 0m1.106s 00:05:33.844 sys 0m0.044s 00:05:33.844 13:46:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.844 ************************************ 00:05:33.844 END TEST event_reactor_perf 00:05:33.844 ************************************ 00:05:33.844 13:46:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.844 13:46:26 event -- event/event.sh@49 -- # uname -s 00:05:33.844 13:46:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.844 13:46:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.844 13:46:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.844 13:46:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.844 13:46:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.844 ************************************ 00:05:33.844 START TEST event_scheduler 00:05:33.844 ************************************ 00:05:33.844 13:46:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.103 * Looking for test storage... 
00:05:34.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:34.103 13:46:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.104 13:46:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.104 13:46:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.104 13:46:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.104 --rc genhtml_branch_coverage=1 00:05:34.104 --rc genhtml_function_coverage=1 00:05:34.104 --rc genhtml_legend=1 00:05:34.104 --rc geninfo_all_blocks=1 00:05:34.104 --rc geninfo_unexecuted_blocks=1 00:05:34.104 00:05:34.104 ' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.104 --rc genhtml_branch_coverage=1 00:05:34.104 --rc genhtml_function_coverage=1 00:05:34.104 --rc genhtml_legend=1 00:05:34.104 --rc geninfo_all_blocks=1 00:05:34.104 --rc geninfo_unexecuted_blocks=1 00:05:34.104 00:05:34.104 ' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.104 --rc genhtml_branch_coverage=1 00:05:34.104 --rc genhtml_function_coverage=1 00:05:34.104 --rc genhtml_legend=1 00:05:34.104 --rc geninfo_all_blocks=1 00:05:34.104 --rc geninfo_unexecuted_blocks=1 00:05:34.104 00:05:34.104 ' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.104 --rc genhtml_branch_coverage=1 00:05:34.104 --rc genhtml_function_coverage=1 00:05:34.104 --rc genhtml_legend=1 00:05:34.104 --rc geninfo_all_blocks=1 00:05:34.104 --rc geninfo_unexecuted_blocks=1 00:05:34.104 00:05:34.104 ' 00:05:34.104 13:46:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.104 13:46:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59390 00:05:34.104 13:46:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.104 13:46:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.104 13:46:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59390 00:05:34.104 13:46:27 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59390 ']' 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.104 13:46:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.362 [2024-12-11 13:46:27.161880] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:34.362 [2024-12-11 13:46:27.162021] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:05:34.362 [2024-12-11 13:46:27.314976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.362 [2024-12-11 13:46:27.385725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.362 [2024-12-11 13:46:27.385814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.362 [2024-12-11 13:46:27.385904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.362 [2024-12-11 13:46:27.385910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.297 13:46:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.297 13:46:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:35.297 13:46:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.297 13:46:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.297 13:46:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.297 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.297 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.297 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.297 POWER: Cannot set governor of lcore 0 to performance 00:05:35.297 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.297 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.297 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.297 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.297 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:35.297 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:35.297 POWER: Unable to set Power Management Environment for lcore 0 00:05:35.297 [2024-12-11 13:46:28.160449] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:35.297 [2024-12-11 13:46:28.160547] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:35.297 [2024-12-11 13:46:28.160558] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.297 [2024-12-11 13:46:28.160571] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.297 [2024-12-11 13:46:28.160580] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.297 [2024-12-11 13:46:28.160587] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.297 13:46:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 [2024-12-11 13:46:28.225920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.298 [2024-12-11 13:46:28.265508] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 ************************************ 00:05:35.298 START TEST scheduler_create_thread 00:05:35.298 ************************************ 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 2 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 3 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 4 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 5 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 6 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 7 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 8 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.298 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.556 9 00:05:35.556 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.556 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.556 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.556 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.556 10 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.557 13:46:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.557 13:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.490 ************************************ 00:05:36.490 END TEST scheduler_create_thread 00:05:36.490 ************************************ 00:05:36.491 13:46:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.491 00:05:36.491 real 0m1.170s 00:05:36.491 user 0m0.019s 00:05:36.491 sys 0m0.004s 00:05:36.491 13:46:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.491 13:46:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 13:46:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.491 13:46:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59390 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59390 ']' 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59390 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59390 00:05:36.491 killing process with pid 59390 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59390' 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59390 00:05:36.491 13:46:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59390 00:05:37.056 [2024-12-11 13:46:29.927961] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:37.314 ************************************ 00:05:37.314 END TEST event_scheduler 00:05:37.314 ************************************ 00:05:37.314 00:05:37.314 real 0m3.258s 00:05:37.314 user 0m5.884s 00:05:37.314 sys 0m0.373s 00:05:37.314 13:46:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.314 13:46:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.314 13:46:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.314 13:46:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.314 13:46:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.314 13:46:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.314 13:46:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.314 ************************************ 00:05:37.314 START TEST app_repeat 00:05:37.314 ************************************ 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:37.314 Process app_repeat pid: 59473 00:05:37.314 spdk_app_start Round 0 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59473 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59473' 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.314 13:46:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59473 /var/tmp/spdk-nbd.sock 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59473 ']' 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
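The scheduler trace above drives everything through scripts/rpc.py with the scheduler_plugin loaded: pinned busy threads (masks 0x1-0x8, activity 100), pinned idle threads (activity 0), one thread adjusted with scheduler_thread_set_active, one deleted, then the app is stopped. Below is a minimal standalone sketch of the same RPC sequence; the socket path and repo location are taken from the log, the plugin path is an assumption, and the thread ids returned will differ from run to run.

```bash
#!/usr/bin/env bash
# Hedged sketch: replay the scheduler_create_thread RPC sequence from the trace
# against an SPDK app already listening on /var/tmp/spdk.sock.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # repo path as seen in the log
export PYTHONPATH=$SPDK_DIR/test/event/scheduler:${PYTHONPATH:-}   # assumption: location of scheduler_plugin.py
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

# Pinned threads that stay busy (activity 100) on cores 0-3.
for mask in 0x1 0x2 0x4 0x8; do
    $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100
done

# Pinned threads that stay idle (activity 0) on the same cores.
for mask in 0x1 0x2 0x4 0x8; do
    $RPC scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads; the RPC prints the new thread id, which later calls reuse.
tid=$($RPC scheduler_thread_create -n half_active -a 0)
$RPC scheduler_thread_set_active "$tid" 50

tid=$($RPC scheduler_thread_create -n deleted -a 100)
$RPC scheduler_thread_delete "$tid"
```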
00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.314 13:46:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.314 [2024-12-11 13:46:30.208093] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:37.315 [2024-12-11 13:46:30.208509] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:05:37.315 [2024-12-11 13:46:30.352686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.573 [2024-12-11 13:46:30.414638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.573 [2024-12-11 13:46:30.414648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.573 [2024-12-11 13:46:30.470776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.573 13:46:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.573 13:46:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.573 13:46:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.831 Malloc0 00:05:37.831 13:46:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.090 Malloc1 00:05:38.348 13:46:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.348 13:46:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.607 /dev/nbd0 00:05:38.607 13:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.607 13:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.607 1+0 records in 00:05:38.607 1+0 records out 00:05:38.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213205 s, 19.2 MB/s 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.607 13:46:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.607 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.607 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.607 13:46:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.866 /dev/nbd1 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.866 1+0 records in 00:05:38.866 1+0 records out 00:05:38.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218142 s, 18.8 MB/s 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.866 13:46:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.866 13:46:31 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.866 13:46:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.127 { 00:05:39.127 "nbd_device": "/dev/nbd0", 00:05:39.127 "bdev_name": "Malloc0" 00:05:39.127 }, 00:05:39.127 { 00:05:39.127 "nbd_device": "/dev/nbd1", 00:05:39.127 "bdev_name": "Malloc1" 00:05:39.127 } 00:05:39.127 ]' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.127 { 00:05:39.127 "nbd_device": "/dev/nbd0", 00:05:39.127 "bdev_name": "Malloc0" 00:05:39.127 }, 00:05:39.127 { 00:05:39.127 "nbd_device": "/dev/nbd1", 00:05:39.127 "bdev_name": "Malloc1" 00:05:39.127 } 00:05:39.127 ]' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.127 /dev/nbd1' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.127 /dev/nbd1' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.127 256+0 records in 00:05:39.127 256+0 records out 00:05:39.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00959454 s, 109 MB/s 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.127 256+0 records in 00:05:39.127 256+0 records out 00:05:39.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255526 s, 41.0 MB/s 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.127 256+0 records in 00:05:39.127 
256+0 records out 00:05:39.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248506 s, 42.2 MB/s 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.127 13:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.388 13:46:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.388 13:46:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.388 13:46:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.388 13:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.389 13:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.389 13:46:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.389 13:46:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.389 13:46:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.647 13:46:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.905 13:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.163 13:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.163 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.163 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.163 13:46:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.163 13:46:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.421 13:46:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.679 [2024-12-11 13:46:33.503137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.679 [2024-12-11 13:46:33.540694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.679 [2024-12-11 13:46:33.540730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.679 [2024-12-11 13:46:33.598783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.679 [2024-12-11 13:46:33.598889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.679 [2024-12-11 13:46:33.598904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.962 13:46:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.962 spdk_app_start Round 1 00:05:43.962 13:46:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.962 13:46:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59473 /var/tmp/spdk-nbd.sock 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59473 ']' 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
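Each app_repeat round above follows the same data path: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each with dd, verify it with cmp, then detach. A condensed sketch of that write/verify cycle, assuming the NBD RPC server is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded; the scratch-file location is an assumption.

```bash
#!/usr/bin/env bash
# Hedged sketch of one app_repeat round: attach, write, verify, detach.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"  # paths from the log
TMP=$(mktemp /tmp/nbdrandtest.XXXXXX)   # assumption: any scratch file works

# Two 64 MiB malloc bdevs with 4 KiB blocks, exposed over NBD.
$RPC bdev_malloc_create 64 4096   # prints Malloc0
$RPC bdev_malloc_create 64 4096   # prints Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB (256 x 4 KiB) of random data through each device, then compare.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$nbd"
done
rm -f "$TMP"

# Detach both devices and confirm nothing is left exported.
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks
```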
00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.962 13:46:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.962 13:46:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.962 Malloc0 00:05:43.962 13:46:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.220 Malloc1 00:05:44.478 13:46:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.478 13:46:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.478 /dev/nbd0 00:05:44.737 13:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.737 13:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.737 1+0 records in 00:05:44.737 1+0 records out 
00:05:44.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278322 s, 14.7 MB/s 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.737 13:46:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.737 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.737 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.737 13:46:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.995 /dev/nbd1 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.995 1+0 records in 00:05:44.995 1+0 records out 00:05:44.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262137 s, 15.6 MB/s 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.995 13:46:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.995 13:46:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.253 { 00:05:45.253 "nbd_device": "/dev/nbd0", 00:05:45.253 "bdev_name": "Malloc0" 00:05:45.253 }, 00:05:45.253 { 00:05:45.253 "nbd_device": "/dev/nbd1", 00:05:45.253 "bdev_name": "Malloc1" 00:05:45.253 } 
00:05:45.253 ]' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.253 { 00:05:45.253 "nbd_device": "/dev/nbd0", 00:05:45.253 "bdev_name": "Malloc0" 00:05:45.253 }, 00:05:45.253 { 00:05:45.253 "nbd_device": "/dev/nbd1", 00:05:45.253 "bdev_name": "Malloc1" 00:05:45.253 } 00:05:45.253 ]' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.253 /dev/nbd1' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.253 /dev/nbd1' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.253 13:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.254 256+0 records in 00:05:45.254 256+0 records out 00:05:45.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691564 s, 152 MB/s 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.254 256+0 records in 00:05:45.254 256+0 records out 00:05:45.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020257 s, 51.8 MB/s 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.254 256+0 records in 00:05:45.254 256+0 records out 00:05:45.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255268 s, 41.1 MB/s 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.254 13:46:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.820 13:46:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.078 13:46:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.336 13:46:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.336 13:46:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.902 13:46:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.902 [2024-12-11 13:46:39.834045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.902 [2024-12-11 13:46:39.872090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.902 [2024-12-11 13:46:39.872097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.902 [2024-12-11 13:46:39.930378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.902 [2024-12-11 13:46:39.930503] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.902 [2024-12-11 13:46:39.930518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.183 spdk_app_start Round 2 00:05:50.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.183 13:46:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.183 13:46:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.183 13:46:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59473 /var/tmp/spdk-nbd.sock 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59473 ']' 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
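Before touching a device, the helper in the trace polls /proc/partitions for the nbd entry and then proves the device answers with a single 4 KiB direct read whose size it checks with stat. A minimal sketch of that readiness check, assuming the same 20-attempt budget as the helper in common/autotest_common.sh; the sleep interval and the scratch path are assumptions (the log uses a file under the repo's test/event directory).

```bash
# Hedged sketch of a waitfornbd-style readiness check.
waitfornbd() {
    local nbd_name=$1 i
    # Wait for the kernel to publish the partition entry for the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct-I/O read of a single 4 KiB block proves the device responds.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
    rm -f /tmp/nbdtest
}

waitfornbd nbd0 && echo "nbd0 ready"
```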
00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.183 13:46:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.183 13:46:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.441 Malloc0 00:05:50.441 13:46:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.698 Malloc1 00:05:50.698 13:46:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.698 13:46:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.956 /dev/nbd0 00:05:50.956 13:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.956 13:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.956 1+0 records in 00:05:50.956 1+0 records out 
00:05:50.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306818 s, 13.3 MB/s 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.956 13:46:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.956 13:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.956 13:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.956 13:46:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.213 /dev/nbd1 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.213 1+0 records in 00:05:51.213 1+0 records out 00:05:51.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252715 s, 16.2 MB/s 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.213 13:46:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.213 13:46:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.778 { 00:05:51.778 "nbd_device": "/dev/nbd0", 00:05:51.778 "bdev_name": "Malloc0" 00:05:51.778 }, 00:05:51.778 { 00:05:51.778 "nbd_device": "/dev/nbd1", 00:05:51.778 "bdev_name": "Malloc1" 00:05:51.778 } 
00:05:51.778 ]' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.778 { 00:05:51.778 "nbd_device": "/dev/nbd0", 00:05:51.778 "bdev_name": "Malloc0" 00:05:51.778 }, 00:05:51.778 { 00:05:51.778 "nbd_device": "/dev/nbd1", 00:05:51.778 "bdev_name": "Malloc1" 00:05:51.778 } 00:05:51.778 ]' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.778 /dev/nbd1' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.778 /dev/nbd1' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.778 256+0 records in 00:05:51.778 256+0 records out 00:05:51.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104547 s, 100 MB/s 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.778 256+0 records in 00:05:51.778 256+0 records out 00:05:51.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228483 s, 45.9 MB/s 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.778 256+0 records in 00:05:51.778 256+0 records out 00:05:51.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256542 s, 40.9 MB/s 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.778 13:46:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.036 13:46:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.292 13:46:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.550 13:46:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.550 13:46:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.116 13:46:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.116 [2024-12-11 13:46:46.064797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.116 [2024-12-11 13:46:46.104210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.116 [2024-12-11 13:46:46.104216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.116 [2024-12-11 13:46:46.160551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.116 [2024-12-11 13:46:46.160699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.116 [2024-12-11 13:46:46.160726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.398 13:46:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59473 /var/tmp/spdk-nbd.sock 00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59473 ']' 00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
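The app_repeat trace above exercises nbd_dd_data_verify: fill a scratch file with random data, copy it onto each NBD device with O_DIRECT, then compare the device contents back against the file before tearing the disks down. A minimal standalone sketch of that write-then-verify pattern, with illustrative paths rather than the test's own:

    # seed 1 MiB of random data in a scratch file (illustrative path)
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # write the same data to every device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # read back and compare the first 1 MiB byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "verify failed on $dev"
    done
    rm "$tmp_file"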
00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.398 13:46:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.398 13:46:49 event.app_repeat -- event/event.sh@39 -- # killprocess 59473 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59473 ']' 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59473 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59473 00:05:56.398 killing process with pid 59473 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59473' 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59473 00:05:56.398 13:46:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59473 00:05:56.398 spdk_app_start is called in Round 0. 00:05:56.398 Shutdown signal received, stop current app iteration 00:05:56.398 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:05:56.398 spdk_app_start is called in Round 1. 00:05:56.398 Shutdown signal received, stop current app iteration 00:05:56.398 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:05:56.398 spdk_app_start is called in Round 2. 00:05:56.398 Shutdown signal received, stop current app iteration 00:05:56.398 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:05:56.398 spdk_app_start is called in Round 3. 00:05:56.398 Shutdown signal received, stop current app iteration 00:05:56.398 13:46:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.399 13:46:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.399 00:05:56.399 real 0m19.238s 00:05:56.399 user 0m44.045s 00:05:56.399 sys 0m2.936s 00:05:56.399 13:46:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.399 ************************************ 00:05:56.399 END TEST app_repeat 00:05:56.399 ************************************ 00:05:56.399 13:46:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.657 13:46:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.657 13:46:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.657 13:46:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.657 13:46:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.657 13:46:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.657 ************************************ 00:05:56.657 START TEST cpu_locks 00:05:56.657 ************************************ 00:05:56.657 13:46:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.657 * Looking for test storage... 
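The killprocess helper traced here follows a conservative kill sequence: confirm the pid is still alive with kill -0, check the process name with ps before signalling anything, then kill and wait. A rough equivalent, assuming the target was launched from the same shell so that wait can reap it:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # sanity-check what is about to be killed
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                               # only reaps children of this shell
    }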
00:05:56.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.658 13:46:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.658 --rc genhtml_branch_coverage=1 00:05:56.658 --rc genhtml_function_coverage=1 00:05:56.658 --rc genhtml_legend=1 00:05:56.658 --rc geninfo_all_blocks=1 00:05:56.658 --rc geninfo_unexecuted_blocks=1 00:05:56.658 00:05:56.658 ' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.658 --rc genhtml_branch_coverage=1 00:05:56.658 --rc genhtml_function_coverage=1 
00:05:56.658 --rc genhtml_legend=1 00:05:56.658 --rc geninfo_all_blocks=1 00:05:56.658 --rc geninfo_unexecuted_blocks=1 00:05:56.658 00:05:56.658 ' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.658 --rc genhtml_branch_coverage=1 00:05:56.658 --rc genhtml_function_coverage=1 00:05:56.658 --rc genhtml_legend=1 00:05:56.658 --rc geninfo_all_blocks=1 00:05:56.658 --rc geninfo_unexecuted_blocks=1 00:05:56.658 00:05:56.658 ' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.658 --rc genhtml_branch_coverage=1 00:05:56.658 --rc genhtml_function_coverage=1 00:05:56.658 --rc genhtml_legend=1 00:05:56.658 --rc geninfo_all_blocks=1 00:05:56.658 --rc geninfo_unexecuted_blocks=1 00:05:56.658 00:05:56.658 ' 00:05:56.658 13:46:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.658 13:46:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.658 13:46:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.658 13:46:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.658 13:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.658 ************************************ 00:05:56.658 START TEST default_locks 00:05:56.658 ************************************ 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59912 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59912 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59912 ']' 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.658 13:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.917 [2024-12-11 13:46:49.725434] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
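The lcov gate a little earlier walks two version strings field by field (cmp_versions splits on dots and dashes and compares each field numerically) to decide whether the installed lcov predates version 2. A compact sketch of the same idea, not the exact helper from scripts/common.sh, and assuming purely numeric fields:

    # return 0 when $1 sorts before $2, comparing dot/dash separated numeric fields
    version_lt() {
        local IFS='.-'
        local -a a=($1) b=($2)
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2, enable the compatibility options"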
00:05:56.917 [2024-12-11 13:46:49.725544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59912 ] 00:05:56.917 [2024-12-11 13:46:49.874253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.917 [2024-12-11 13:46:49.920368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.175 [2024-12-11 13:46:49.992056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.175 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.175 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:57.175 13:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59912 00:05:57.175 13:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59912 00:05:57.175 13:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59912 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59912 ']' 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59912 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59912 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.742 killing process with pid 59912 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59912' 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59912 00:05:57.742 13:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59912 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59912 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59912 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59912 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59912 ']' 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.309 
13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 ERROR: process (pid: 59912) is no longer running 00:05:58.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59912) - No such process 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.309 00:05:58.309 real 0m1.411s 00:05:58.309 user 0m1.349s 00:05:58.309 sys 0m0.558s 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.309 13:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 ************************************ 00:05:58.309 END TEST default_locks 00:05:58.309 ************************************ 00:05:58.309 13:46:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.309 13:46:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.309 13:46:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.309 13:46:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 ************************************ 00:05:58.309 START TEST default_locks_via_rpc 00:05:58.309 ************************************ 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59956 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59956 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59956 ']' 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:58.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.309 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 [2024-12-11 13:46:51.191993] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:58.309 [2024-12-11 13:46:51.192104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:05:58.309 [2024-12-11 13:46:51.341099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.568 [2024-12-11 13:46:51.390409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.568 [2024-12-11 13:46:51.463045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59956 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59956 00:05:58.826 13:46:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59956 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59956 ']' 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59956 00:05:59.392 13:46:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59956 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59956' 00:05:59.392 killing process with pid 59956 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59956 00:05:59.392 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59956 00:05:59.650 00:05:59.650 real 0m1.441s 00:05:59.650 user 0m1.400s 00:05:59.650 sys 0m0.547s 00:05:59.650 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.650 13:46:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.650 ************************************ 00:05:59.650 END TEST default_locks_via_rpc 00:05:59.650 ************************************ 00:05:59.650 13:46:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.650 13:46:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.650 13:46:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.650 13:46:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.650 ************************************ 00:05:59.650 START TEST non_locking_app_on_locked_coremask 00:05:59.650 ************************************ 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59996 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59996 /var/tmp/spdk.sock 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59996 ']' 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
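locks_exist in these traces is how the suite proves that a running target really holds its CPU-core lock: list the file locks owned by the pid and grep for the spdk_cpu_lock files under /var/tmp. The same check in isolation, with an illustrative pid:

    pid=59956   # illustrative; use the pid of a running spdk_tgt
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a core lock file"
    else
        echo "pid $pid holds no core lock file"
    fi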
00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.650 13:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.650 [2024-12-11 13:46:52.679501] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:59.650 [2024-12-11 13:46:52.679598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59996 ] 00:05:59.907 [2024-12-11 13:46:52.821792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.907 [2024-12-11 13:46:52.869244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.907 [2024-12-11 13:46:52.940133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60008 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60008 /var/tmp/spdk2.sock 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60008 ']' 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.165 13:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.165 [2024-12-11 13:46:53.202203] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:00.165 [2024-12-11 13:46:53.202302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:06:00.423 [2024-12-11 13:46:53.362211] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
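The second target in this test shares core 0 with the first one but is started with --disable-cpumask-locks and its own RPC socket, so it never tries to claim the core lock and both instances can coexist. Stripped of the suite's helpers, the launch pair looks roughly like this (spdk_tgt abbreviates the build/bin/spdk_tgt path, and each instance still needs time to start listening):

    # first instance claims core 0 and the default RPC socket
    spdk_tgt -m 0x1 &
    pid1=$!

    # second instance shares core 0, skips the lock, and listens on its own socket
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!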
00:06:00.424 [2024-12-11 13:46:53.362260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.424 [2024-12-11 13:46:53.455543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.682 [2024-12-11 13:46:53.594513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.248 13:46:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.248 13:46:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.248 13:46:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59996 00:06:01.248 13:46:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59996 00:06:01.248 13:46:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59996 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59996 ']' 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59996 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59996 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.184 killing process with pid 59996 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59996' 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59996 00:06:02.184 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59996 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60008 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60008 ']' 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60008 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60008 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.118 killing process with pid 60008 00:06:03.118 13:46:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60008' 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60008 00:06:03.118 13:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60008 00:06:03.376 00:06:03.376 real 0m3.676s 00:06:03.376 user 0m3.989s 00:06:03.376 sys 0m1.112s 00:06:03.376 13:46:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.376 13:46:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.376 ************************************ 00:06:03.376 END TEST non_locking_app_on_locked_coremask 00:06:03.376 ************************************ 00:06:03.376 13:46:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.376 13:46:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.376 13:46:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.376 13:46:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.376 ************************************ 00:06:03.376 START TEST locking_app_on_unlocked_coremask 00:06:03.376 ************************************ 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60075 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60075 /var/tmp/spdk.sock 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60075 ']' 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.376 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.377 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.377 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.377 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.377 [2024-12-11 13:46:56.410856] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:03.377 [2024-12-11 13:46:56.410951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:06:03.635 [2024-12-11 13:46:56.551743] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
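Here the roles are reversed: the first target starts with --disable-cpumask-locks. The earlier default_locks_via_rpc run showed that the same behaviour can also be toggled on a live target through rpc_cmd; the direct rpc.py calls would look like this (path relative to the SPDK repo root, socket as in these traces):

    # release the per-core lock files held by the running target ...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks

    # ... and claim them again
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks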
00:06:03.635 [2024-12-11 13:46:56.551785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.635 [2024-12-11 13:46:56.609533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.635 [2024-12-11 13:46:56.679964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60084 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60084 /var/tmp/spdk2.sock 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60084 ']' 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.893 13:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.151 [2024-12-11 13:46:56.939519] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
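Each "Waiting for process to start up and listen on UNIX domain socket ..." message corresponds to polling the target's RPC socket until it answers. A simple loop in the same spirit, using rpc_get_methods as a lightweight probe (the timeout and probe choice are this sketch's, not the helper's exact internals):

    wait_for_rpc() {
        local sock=$1 i
        for (( i = 0; i < 100; i++ )); do
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc /var/tmp/spdk2.sock || echo "second target never came up"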
00:06:04.151 [2024-12-11 13:46:56.939604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:06:04.151 [2024-12-11 13:46:57.096091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.409 [2024-12-11 13:46:57.217078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.409 [2024-12-11 13:46:57.361693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.975 13:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.975 13:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.975 13:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60084 00:06:04.975 13:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60084 00:06:04.975 13:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60075 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60075 ']' 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60075 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60075 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.909 killing process with pid 60075 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60075' 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60075 00:06:05.909 13:46:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60075 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60084 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60084 ']' 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60084 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60084 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.477 killing process with pid 60084 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60084' 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60084 00:06:06.477 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60084 00:06:07.045 00:06:07.045 real 0m3.524s 00:06:07.045 user 0m3.778s 00:06:07.045 sys 0m1.063s 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.045 ************************************ 00:06:07.045 END TEST locking_app_on_unlocked_coremask 00:06:07.045 ************************************ 00:06:07.045 13:46:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.045 13:46:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.045 13:46:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.045 13:46:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.045 ************************************ 00:06:07.045 START TEST locking_app_on_locked_coremask 00:06:07.045 ************************************ 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60151 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60151 /var/tmp/spdk.sock 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60151 ']' 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.045 13:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.045 [2024-12-11 13:46:59.990287] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:07.045 [2024-12-11 13:46:59.990382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:06:07.304 [2024-12-11 13:47:00.133927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.304 [2024-12-11 13:47:00.184791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.304 [2024-12-11 13:47:00.257123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60167 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60167 /var/tmp/spdk2.sock 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60167 /var/tmp/spdk2.sock 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60167 /var/tmp/spdk2.sock 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60167 ']' 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.239 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.239 [2024-12-11 13:47:01.121326] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
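NOT waitforlisten 60167 ... is the suite's way of asserting that a command must fail: the helper runs its arguments and inverts the exit status, so the test only passes if the second instance never manages to listen. A stripped-down version of that inversion (the real helper in autotest_common.sh also records the exit code):

    NOT() {
        # run the given command; succeed only if it fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT false && echo "false failed, as expected"
    NOT true  || echo "true succeeded, so NOT reports failure"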
00:06:08.239 [2024-12-11 13:47:01.121419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60167 ] 00:06:08.240 [2024-12-11 13:47:01.284767] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60151 has claimed it. 00:06:08.240 [2024-12-11 13:47:01.284813] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.806 ERROR: process (pid: 60167) is no longer running 00:06:08.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60167) - No such process 00:06:08.806 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.806 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.806 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60151 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60151 00:06:08.807 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60151 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60151 ']' 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60151 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60151 00:06:09.374 killing process with pid 60151 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60151' 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60151 00:06:09.374 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60151 00:06:09.633 00:06:09.633 real 0m2.695s 00:06:09.633 user 0m3.198s 00:06:09.633 sys 0m0.646s 00:06:09.633 13:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.633 13:47:02 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:09.633 ************************************ 00:06:09.633 END TEST locking_app_on_locked_coremask 00:06:09.633 ************************************ 00:06:09.633 13:47:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.633 13:47:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.633 13:47:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.633 13:47:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.633 ************************************ 00:06:09.633 START TEST locking_overlapped_coremask 00:06:09.633 ************************************ 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60212 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60212 /var/tmp/spdk.sock 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60212 ']' 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.633 13:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.891 [2024-12-11 13:47:02.740994] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
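The failure above is the expected outcome: pid 60167 was started on the same single-core mask while pid 60151 still held the lock, so spdk_app_start aborts with "Cannot create lock on core 0" and the NOT wrapper turns that abort into a pass. Reproducing the conflict by hand looks roughly like this (spdk_tgt abbreviates the full build path, and the first instance must already be up):

    spdk_tgt -m 0x1 &          # first target claims core 0
    first=$!
    # ... wait for its RPC socket ...

    # a second target on the same core, without --disable-cpumask-locks, should abort
    if spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started despite the held core lock"
    else
        echo "second instance refused to start, as intended"
    fi
    kill "$first"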
00:06:09.891 [2024-12-11 13:47:02.741129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:06:09.891 [2024-12-11 13:47:02.888497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.149 [2024-12-11 13:47:02.937761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.149 [2024-12-11 13:47:02.937899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.149 [2024-12-11 13:47:02.937908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.149 [2024-12-11 13:47:03.008342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60223 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60223 /var/tmp/spdk2.sock 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60223 /var/tmp/spdk2.sock 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60223 /var/tmp/spdk2.sock 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60223 ']' 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.408 13:47:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.408 [2024-12-11 13:47:03.282028] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
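locking_overlapped_coremask keeps the first target on mask 0x7 (cores 0-2) and then tries a second one on 0x1c (cores 2-4); the single shared core 2 is enough to trigger the lock failure recorded on the next lines. The overlap itself is plain shell arithmetic:

    mask1=0x7    # cores 0,1,2
    mask2=0x1c   # cores 2,3,4
    printf 'overlapping cores mask: 0x%x\n' $(( mask1 & mask2 ))   # 0x4, i.e. core 2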
00:06:10.408 [2024-12-11 13:47:03.282118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:06:10.408 [2024-12-11 13:47:03.445567] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60212 has claimed it. 00:06:10.408 [2024-12-11 13:47:03.445632] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.975 ERROR: process (pid: 60223) is no longer running 00:06:10.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60223) - No such process 00:06:10.975 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.975 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.975 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.975 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.975 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60212 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60212 ']' 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60212 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.976 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60212 00:06:11.234 killing process with pid 60212 00:06:11.234 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.234 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.234 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60212' 00:06:11.234 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60212 00:06:11.234 13:47:04 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60212 00:06:11.802 00:06:11.802 real 0m1.941s 00:06:11.802 user 0m5.244s 00:06:11.802 sys 0m0.458s 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 ************************************ 00:06:11.802 END TEST locking_overlapped_coremask 00:06:11.802 ************************************ 00:06:11.802 13:47:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.802 13:47:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.802 13:47:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.802 13:47:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 ************************************ 00:06:11.802 START TEST locking_overlapped_coremask_via_rpc 00:06:11.802 ************************************ 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60268 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60268 /var/tmp/spdk.sock 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60268 ']' 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.802 13:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 [2024-12-11 13:47:04.726668] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:11.802 [2024-12-11 13:47:04.726783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:06:12.060 [2024-12-11 13:47:04.874874] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.060 [2024-12-11 13:47:04.874926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.061 [2024-12-11 13:47:04.937079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.061 [2024-12-11 13:47:04.937201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.061 [2024-12-11 13:47:04.937209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.061 [2024-12-11 13:47:05.016285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60279 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60279 /var/tmp/spdk2.sock 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60279 ']' 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.318 13:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.318 [2024-12-11 13:47:05.280896] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:12.318 [2024-12-11 13:47:05.280980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60279 ] 00:06:12.574 [2024-12-11 13:47:05.442752] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.574 [2024-12-11 13:47:05.442806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.574 [2024-12-11 13:47:05.573320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.574 [2024-12-11 13:47:05.573444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.574 [2024-12-11 13:47:05.573444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.832 [2024-12-11 13:47:05.712286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.398 [2024-12-11 13:47:06.300820] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60268 has claimed it. 
00:06:13.398 request: 00:06:13.398 { 00:06:13.398 "method": "framework_enable_cpumask_locks", 00:06:13.398 "req_id": 1 00:06:13.398 } 00:06:13.398 Got JSON-RPC error response 00:06:13.398 response: 00:06:13.398 { 00:06:13.398 "code": -32603, 00:06:13.398 "message": "Failed to claim CPU core: 2" 00:06:13.398 } 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60268 /var/tmp/spdk.sock 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60268 ']' 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.398 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60279 /var/tmp/spdk2.sock 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60279 ']' 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
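The two cpu_locks cases above come down to a single overlap: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 can only be claimed once. In the plain coremask case the second target aborts at startup ("Unable to acquire lock on assigned core mask - exiting"); in the via_rpc case both targets boot with --disable-cpumask-locks and the claim is only attempted later through framework_enable_cpumask_locks, which is what produces the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch of that sequence, assuming it is run from the SPDK repo root with the same spdk_tgt and rpc.py paths this log uses (not a substitute for event/cpu_locks.sh itself):

    # first target claims cores 0-2 (mask 0x7) at startup and creates
    # /var/tmp/spdk_cpu_lock_000 .. _002 while locking is enabled
    build/bin/spdk_tgt -m 0x7 &

    # second target overlaps on core 2 (mask 0x1c = cores 2-4); with
    # --disable-cpumask-locks it still starts, because no lock is taken yet
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # (cpu_locks.sh waits for each RPC socket with waitforlisten before the next step)

    # asking the second target to claim its cores now fails with
    # JSON-RPC error -32603 "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

Whichever process ends up holding the claim leaves /var/tmp/spdk_cpu_lock_000 through _002 behind, which is the set check_remaining_locks compares against the /var/tmp/spdk_cpu_lock_{000..002} expansion seen in the trace.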
00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.657 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.915 00:06:13.915 real 0m2.178s 00:06:13.915 user 0m1.201s 00:06:13.915 sys 0m0.162s 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.915 ************************************ 00:06:13.915 END TEST locking_overlapped_coremask_via_rpc 00:06:13.915 ************************************ 00:06:13.915 13:47:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.915 13:47:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.915 13:47:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60268 ]] 00:06:13.915 13:47:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60268 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60268 ']' 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60268 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60268 00:06:13.915 killing process with pid 60268 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60268' 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60268 00:06:13.915 13:47:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60268 00:06:14.482 13:47:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60279 ]] 00:06:14.482 13:47:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60279 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60279 ']' 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60279 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.482 
13:47:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60279 00:06:14.482 killing process with pid 60279 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60279' 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60279 00:06:14.482 13:47:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60279 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.740 Process with pid 60268 is not found 00:06:14.740 Process with pid 60279 is not found 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60268 ]] 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60268 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60268 ']' 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60268 00:06:14.740 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60268) - No such process 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60268 is not found' 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60279 ]] 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60279 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60279 ']' 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60279 00:06:14.740 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60279) - No such process 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60279 is not found' 00:06:14.740 13:47:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.740 ************************************ 00:06:14.740 END TEST cpu_locks 00:06:14.740 ************************************ 00:06:14.740 00:06:14.740 real 0m18.269s 00:06:14.740 user 0m31.629s 00:06:14.740 sys 0m5.448s 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.740 13:47:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.740 ************************************ 00:06:14.740 END TEST event 00:06:14.740 ************************************ 00:06:14.740 00:06:14.740 real 0m45.045s 00:06:14.740 user 1m28.082s 00:06:14.740 sys 0m9.165s 00:06:14.740 13:47:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.740 13:47:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.999 13:47:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.999 13:47:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.999 13:47:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.999 13:47:07 -- common/autotest_common.sh@10 -- # set +x 00:06:14.999 ************************************ 00:06:14.999 START TEST thread 00:06:14.999 ************************************ 00:06:14.999 13:47:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.999 * Looking for test storage... 
00:06:14.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.999 13:47:07 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.999 13:47:07 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.999 13:47:07 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.999 13:47:07 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.999 13:47:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.999 13:47:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.999 13:47:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.999 13:47:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.999 13:47:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.999 13:47:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.999 13:47:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.999 13:47:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.999 13:47:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.999 13:47:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.999 13:47:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.999 13:47:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.999 13:47:07 thread -- scripts/common.sh@345 -- # : 1 00:06:14.999 13:47:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.999 13:47:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.999 13:47:07 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.999 13:47:08 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.999 13:47:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.999 13:47:08 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.999 13:47:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.999 13:47:08 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.999 13:47:08 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.999 13:47:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.999 13:47:08 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.999 13:47:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.999 13:47:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.999 13:47:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.999 13:47:08 thread -- scripts/common.sh@368 -- # return 0 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.999 --rc genhtml_branch_coverage=1 00:06:14.999 --rc genhtml_function_coverage=1 00:06:14.999 --rc genhtml_legend=1 00:06:14.999 --rc geninfo_all_blocks=1 00:06:14.999 --rc geninfo_unexecuted_blocks=1 00:06:14.999 00:06:14.999 ' 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.999 --rc genhtml_branch_coverage=1 00:06:14.999 --rc genhtml_function_coverage=1 00:06:14.999 --rc genhtml_legend=1 00:06:14.999 --rc geninfo_all_blocks=1 00:06:14.999 --rc geninfo_unexecuted_blocks=1 00:06:14.999 00:06:14.999 ' 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:14.999 --rc genhtml_branch_coverage=1 00:06:14.999 --rc genhtml_function_coverage=1 00:06:14.999 --rc genhtml_legend=1 00:06:14.999 --rc geninfo_all_blocks=1 00:06:14.999 --rc geninfo_unexecuted_blocks=1 00:06:14.999 00:06:14.999 ' 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.999 --rc genhtml_branch_coverage=1 00:06:14.999 --rc genhtml_function_coverage=1 00:06:14.999 --rc genhtml_legend=1 00:06:14.999 --rc geninfo_all_blocks=1 00:06:14.999 --rc geninfo_unexecuted_blocks=1 00:06:14.999 00:06:14.999 ' 00:06:14.999 13:47:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.999 13:47:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.999 ************************************ 00:06:14.999 START TEST thread_poller_perf 00:06:14.999 ************************************ 00:06:14.999 13:47:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.999 [2024-12-11 13:47:08.041575] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:14.999 [2024-12-11 13:47:08.041849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:06:15.257 [2024-12-11 13:47:08.189796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.257 [2024-12-11 13:47:08.251026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.257 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:16.632 [2024-12-11T13:47:09.679Z] ====================================== 00:06:16.632 [2024-12-11T13:47:09.679Z] busy:2209454474 (cyc) 00:06:16.632 [2024-12-11T13:47:09.679Z] total_run_count: 335000 00:06:16.632 [2024-12-11T13:47:09.679Z] tsc_hz: 2200000000 (cyc) 00:06:16.632 [2024-12-11T13:47:09.679Z] ====================================== 00:06:16.632 [2024-12-11T13:47:09.679Z] poller_cost: 6595 (cyc), 2997 (nsec) 00:06:16.632 00:06:16.632 real 0m1.289s 00:06:16.632 ************************************ 00:06:16.632 END TEST thread_poller_perf 00:06:16.632 ************************************ 00:06:16.632 user 0m1.132s 00:06:16.632 sys 0m0.048s 00:06:16.632 13:47:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.632 13:47:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 13:47:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.632 13:47:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.632 13:47:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.632 13:47:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 ************************************ 00:06:16.632 START TEST thread_poller_perf 00:06:16.632 ************************************ 00:06:16.632 13:47:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.632 [2024-12-11 13:47:09.378059] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:16.632 [2024-12-11 13:47:09.378170] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:06:16.632 [2024-12-11 13:47:09.523693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.632 Running 1000 pollers for 1 seconds with 0 microseconds period. 
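The summary above already contains everything needed to re-derive poller_cost: the figures are consistent with it being simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A quick re-check of the first run's numbers (integer math, matching the 6595 cyc / 2997 nsec printed by poller_perf):

    busy=2209454474        # busy cycles over the 1 s run
    runs=335000            # total_run_count
    tsc_hz=2200000000      # 2.2 GHz

    cyc=$(( busy / runs ))                   # 6595 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2997 nsec per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic applied to the 0-microsecond-period run that follows gives 547 cyc and 248 nsec, matching its printed poller_cost.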
00:06:16.632 [2024-12-11 13:47:09.589712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.007 [2024-12-11T13:47:11.054Z] ====================================== 00:06:18.007 [2024-12-11T13:47:11.054Z] busy:2201934237 (cyc) 00:06:18.007 [2024-12-11T13:47:11.054Z] total_run_count: 4024000 00:06:18.007 [2024-12-11T13:47:11.054Z] tsc_hz: 2200000000 (cyc) 00:06:18.007 [2024-12-11T13:47:11.054Z] ====================================== 00:06:18.007 [2024-12-11T13:47:11.054Z] poller_cost: 547 (cyc), 248 (nsec) 00:06:18.007 00:06:18.007 real 0m1.282s 00:06:18.007 user 0m1.129s 00:06:18.007 sys 0m0.043s 00:06:18.007 ************************************ 00:06:18.007 END TEST thread_poller_perf 00:06:18.007 ************************************ 00:06:18.007 13:47:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.007 13:47:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.007 13:47:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.007 ************************************ 00:06:18.007 END TEST thread 00:06:18.007 ************************************ 00:06:18.007 00:06:18.007 real 0m2.863s 00:06:18.007 user 0m2.407s 00:06:18.007 sys 0m0.232s 00:06:18.007 13:47:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.007 13:47:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.007 13:47:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:18.007 13:47:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.007 13:47:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.007 13:47:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.007 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:06:18.007 ************************************ 00:06:18.007 START TEST app_cmdline 00:06:18.007 ************************************ 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.007 * Looking for test storage... 
00:06:18.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.007 13:47:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.007 --rc genhtml_branch_coverage=1 00:06:18.007 --rc genhtml_function_coverage=1 00:06:18.007 --rc genhtml_legend=1 00:06:18.007 --rc geninfo_all_blocks=1 00:06:18.007 --rc geninfo_unexecuted_blocks=1 00:06:18.007 00:06:18.007 ' 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.007 --rc genhtml_branch_coverage=1 00:06:18.007 --rc genhtml_function_coverage=1 00:06:18.007 --rc genhtml_legend=1 00:06:18.007 --rc geninfo_all_blocks=1 00:06:18.007 --rc geninfo_unexecuted_blocks=1 00:06:18.007 00:06:18.007 ' 00:06:18.007 13:47:10 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.007 --rc genhtml_branch_coverage=1 00:06:18.007 --rc genhtml_function_coverage=1 00:06:18.007 --rc genhtml_legend=1 00:06:18.007 --rc geninfo_all_blocks=1 00:06:18.007 --rc geninfo_unexecuted_blocks=1 00:06:18.007 00:06:18.007 ' 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.008 --rc genhtml_branch_coverage=1 00:06:18.008 --rc genhtml_function_coverage=1 00:06:18.008 --rc genhtml_legend=1 00:06:18.008 --rc geninfo_all_blocks=1 00:06:18.008 --rc geninfo_unexecuted_blocks=1 00:06:18.008 00:06:18.008 ' 00:06:18.008 13:47:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.008 13:47:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60527 00:06:18.008 13:47:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60527 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60527 ']' 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.008 13:47:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.008 13:47:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 [2024-12-11 13:47:11.004085] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
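cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock, and the env_dpdk_get_mem_stats call a little further down is expected to fail with -32601 "Method not found". A condensed sketch of that allowlist check, assuming the repo-relative paths used throughout this log:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # (cmdline.sh waits for /var/tmp/spdk.sock before issuing RPCs)

    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # not allowlisted: JSON-RPC error -32601 "Method not found"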
00:06:18.008 [2024-12-11 13:47:11.004513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:06:18.282 [2024-12-11 13:47:11.146771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.282 [2024-12-11 13:47:11.210391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.282 [2024-12-11 13:47:11.286951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.559 13:47:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.559 13:47:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:18.559 13:47:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:18.817 { 00:06:18.817 "version": "SPDK v25.01-pre git sha1 4dfeb7f95", 00:06:18.817 "fields": { 00:06:18.817 "major": 25, 00:06:18.817 "minor": 1, 00:06:18.817 "patch": 0, 00:06:18.817 "suffix": "-pre", 00:06:18.817 "commit": "4dfeb7f95" 00:06:18.817 } 00:06:18.817 } 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.817 13:47:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.817 13:47:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.817 13:47:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:18.817 13:47:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.076 13:47:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:19.076 13:47:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:19.076 13:47:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:19.076 13:47:11 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.333 request: 00:06:19.333 { 00:06:19.333 "method": "env_dpdk_get_mem_stats", 00:06:19.333 "req_id": 1 00:06:19.333 } 00:06:19.333 Got JSON-RPC error response 00:06:19.333 response: 00:06:19.333 { 00:06:19.333 "code": -32601, 00:06:19.333 "message": "Method not found" 00:06:19.333 } 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.333 13:47:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60527 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60527 ']' 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60527 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60527 00:06:19.333 killing process with pid 60527 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60527' 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 60527 00:06:19.333 13:47:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 60527 00:06:19.590 ************************************ 00:06:19.590 END TEST app_cmdline 00:06:19.590 ************************************ 00:06:19.590 00:06:19.590 real 0m1.843s 00:06:19.590 user 0m2.231s 00:06:19.590 sys 0m0.488s 00:06:19.590 13:47:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.590 13:47:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.590 13:47:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:19.590 13:47:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.590 13:47:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.590 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.848 ************************************ 00:06:19.848 START TEST version 00:06:19.848 ************************************ 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:19.848 * Looking for test storage... 
00:06:19.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.848 13:47:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.848 13:47:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.848 13:47:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.848 13:47:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.848 13:47:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.848 13:47:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.848 13:47:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.848 13:47:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.848 13:47:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.848 13:47:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.848 13:47:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.848 13:47:12 version -- scripts/common.sh@344 -- # case "$op" in 00:06:19.848 13:47:12 version -- scripts/common.sh@345 -- # : 1 00:06:19.848 13:47:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.848 13:47:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.848 13:47:12 version -- scripts/common.sh@365 -- # decimal 1 00:06:19.848 13:47:12 version -- scripts/common.sh@353 -- # local d=1 00:06:19.848 13:47:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.848 13:47:12 version -- scripts/common.sh@355 -- # echo 1 00:06:19.848 13:47:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.848 13:47:12 version -- scripts/common.sh@366 -- # decimal 2 00:06:19.848 13:47:12 version -- scripts/common.sh@353 -- # local d=2 00:06:19.848 13:47:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.848 13:47:12 version -- scripts/common.sh@355 -- # echo 2 00:06:19.848 13:47:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.848 13:47:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.848 13:47:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.848 13:47:12 version -- scripts/common.sh@368 -- # return 0 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.848 --rc genhtml_branch_coverage=1 00:06:19.848 --rc genhtml_function_coverage=1 00:06:19.848 --rc genhtml_legend=1 00:06:19.848 --rc geninfo_all_blocks=1 00:06:19.848 --rc geninfo_unexecuted_blocks=1 00:06:19.848 00:06:19.848 ' 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.848 --rc genhtml_branch_coverage=1 00:06:19.848 --rc genhtml_function_coverage=1 00:06:19.848 --rc genhtml_legend=1 00:06:19.848 --rc geninfo_all_blocks=1 00:06:19.848 --rc geninfo_unexecuted_blocks=1 00:06:19.848 00:06:19.848 ' 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.848 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:19.848 --rc genhtml_branch_coverage=1 00:06:19.848 --rc genhtml_function_coverage=1 00:06:19.848 --rc genhtml_legend=1 00:06:19.848 --rc geninfo_all_blocks=1 00:06:19.848 --rc geninfo_unexecuted_blocks=1 00:06:19.848 00:06:19.848 ' 00:06:19.848 13:47:12 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.848 --rc genhtml_branch_coverage=1 00:06:19.848 --rc genhtml_function_coverage=1 00:06:19.848 --rc genhtml_legend=1 00:06:19.848 --rc geninfo_all_blocks=1 00:06:19.848 --rc geninfo_unexecuted_blocks=1 00:06:19.848 00:06:19.848 ' 00:06:19.849 13:47:12 version -- app/version.sh@17 -- # get_header_version major 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # cut -f2 00:06:19.849 13:47:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.849 13:47:12 version -- app/version.sh@17 -- # major=25 00:06:19.849 13:47:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.849 13:47:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # cut -f2 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.849 13:47:12 version -- app/version.sh@18 -- # minor=1 00:06:19.849 13:47:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.849 13:47:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # cut -f2 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.849 13:47:12 version -- app/version.sh@19 -- # patch=0 00:06:19.849 13:47:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.849 13:47:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # cut -f2 00:06:19.849 13:47:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.849 13:47:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.849 13:47:12 version -- app/version.sh@22 -- # version=25.1 00:06:19.849 13:47:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.849 13:47:12 version -- app/version.sh@28 -- # version=25.1rc0 00:06:19.849 13:47:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:19.849 13:47:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.849 13:47:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:19.849 13:47:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:19.849 00:06:19.849 real 0m0.253s 00:06:19.849 user 0m0.156s 00:06:19.849 sys 0m0.129s 00:06:19.849 ************************************ 00:06:19.849 END TEST version 00:06:19.849 ************************************ 00:06:19.849 13:47:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.849 13:47:12 version -- common/autotest_common.sh@10 -- # set +x 00:06:20.106 13:47:12 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:20.106 13:47:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:20.106 13:47:12 -- spdk/autotest.sh@194 -- # uname -s 00:06:20.106 13:47:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:20.106 13:47:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:20.106 13:47:12 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:20.106 13:47:12 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:20.106 13:47:12 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:20.106 13:47:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.106 13:47:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.106 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.106 ************************************ 00:06:20.106 START TEST spdk_dd 00:06:20.106 ************************************ 00:06:20.106 13:47:12 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:20.106 * Looking for test storage... 00:06:20.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.107 --rc genhtml_branch_coverage=1 00:06:20.107 --rc genhtml_function_coverage=1 00:06:20.107 --rc genhtml_legend=1 00:06:20.107 --rc geninfo_all_blocks=1 00:06:20.107 --rc geninfo_unexecuted_blocks=1 00:06:20.107 00:06:20.107 ' 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.107 --rc genhtml_branch_coverage=1 00:06:20.107 --rc genhtml_function_coverage=1 00:06:20.107 --rc genhtml_legend=1 00:06:20.107 --rc geninfo_all_blocks=1 00:06:20.107 --rc geninfo_unexecuted_blocks=1 00:06:20.107 00:06:20.107 ' 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.107 --rc genhtml_branch_coverage=1 00:06:20.107 --rc genhtml_function_coverage=1 00:06:20.107 --rc genhtml_legend=1 00:06:20.107 --rc geninfo_all_blocks=1 00:06:20.107 --rc geninfo_unexecuted_blocks=1 00:06:20.107 00:06:20.107 ' 00:06:20.107 13:47:13 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.107 --rc genhtml_branch_coverage=1 00:06:20.107 --rc genhtml_function_coverage=1 00:06:20.107 --rc genhtml_legend=1 00:06:20.107 --rc geninfo_all_blocks=1 00:06:20.107 --rc geninfo_unexecuted_blocks=1 00:06:20.107 00:06:20.107 ' 00:06:20.107 13:47:13 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.107 13:47:13 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.365 13:47:13 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.365 13:47:13 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.365 13:47:13 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.365 13:47:13 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.365 13:47:13 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.365 13:47:13 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:20.366 13:47:13 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.366 13:47:13 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:20.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.624 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:20.624 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:20.624 13:47:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:20.624 13:47:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:20.624 13:47:13 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:20.625 13:47:13 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:20.625 13:47:13 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:20.625 13:47:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
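The entries above and below this point are dd/common.sh's check_liburing walking the DT_NEEDED entries of the spdk_dd binary (objdump -p ... | grep NEEDED) and testing each shared library name against liburing.so.*. A minimal standalone sketch of that idea follows; the function name and structure are illustrative, not the real helper, and the loop in the trace simply continues after this aside.

check_liburing_sketch() {
    # Illustrative: walk the DT_NEEDED entries of a binary and report 1 if
    # any of them is liburing, 0 otherwise.
    local bin=$1 lib
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && { echo 1; return 0; }
    done < <(objdump -p "$bin" | grep NEEDED)
    echo 0
}
# e.g. liburing_in_use=$(check_liburing_sketch /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)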
00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:20.625 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.626 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:20.885 * spdk_dd linked to liburing 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:20.885 13:47:13 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:20.885 13:47:13 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:20.886 13:47:13 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:20.886 13:47:13 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:20.886 13:47:13 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:20.886 13:47:13 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:20.886 13:47:13 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:20.886 13:47:13 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:20.886 13:47:13 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:20.886 13:47:13 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:20.886 13:47:13 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:20.886 13:47:13 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.886 13:47:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:20.886 ************************************ 00:06:20.886 START TEST spdk_dd_basic_rw 00:06:20.886 ************************************ 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:20.886 * Looking for test storage... 00:06:20.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.886 13:47:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
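Immediately below, get_native_nvme_bs determines the native block size of the controller at 0000:00:10.0 by running spdk_nvme_identify and matching first the currently selected LBA format and then that format's data size. A self-contained sketch of that extraction follows; the function name and the plain-string capture are illustrative, while the two regular expressions are the ones visible in the trace.

native_nvme_bs_sketch() {
    # Illustrative: report the data size of the currently selected LBA format
    # for an NVMe controller, e.g. 4096 for "LBA Format #04" on this drive.
    local pci=$1 id lbaf re
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"
}
# e.g. native_bs=$(native_nvme_bs_sketch 0000:00:10.0)   # -> 4096 here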
00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:20.887 13:47:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:21.147 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:21.147 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.148 ************************************ 00:06:21.148 START TEST dd_bs_lt_native_bs 00:06:21.148 ************************************ 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.148 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:21.148 { 00:06:21.148 "subsystems": [ 00:06:21.148 { 00:06:21.148 "subsystem": "bdev", 00:06:21.148 "config": [ 00:06:21.148 { 00:06:21.148 "params": { 00:06:21.148 "trtype": "pcie", 00:06:21.148 "traddr": "0000:00:10.0", 00:06:21.148 "name": "Nvme0" 00:06:21.148 }, 00:06:21.148 "method": "bdev_nvme_attach_controller" 00:06:21.148 }, 00:06:21.148 { 00:06:21.148 "method": "bdev_wait_for_examine" 00:06:21.148 } 00:06:21.148 ] 00:06:21.148 } 00:06:21.148 ] 00:06:21.148 } 00:06:21.148 [2024-12-11 13:47:14.171885] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:21.148 [2024-12-11 13:47:14.172006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:06:21.406 [2024-12-11 13:47:14.327180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.406 [2024-12-11 13:47:14.400615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.664 [2024-12-11 13:47:14.464875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.664 [2024-12-11 13:47:14.581310] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:21.664 [2024-12-11 13:47:14.581416] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.922 [2024-12-11 13:47:14.718036] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.922 00:06:21.922 real 0m0.682s 00:06:21.922 user 0m0.459s 00:06:21.922 sys 0m0.175s 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.922 13:47:14 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:21.922 ************************************ 00:06:21.922 END TEST dd_bs_lt_native_bs 00:06:21.922 ************************************ 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.922 ************************************ 00:06:21.922 START TEST dd_rw 00:06:21.922 ************************************ 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.922 13:47:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.487 13:47:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:22.487 13:47:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:22.487 13:47:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.487 13:47:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 [2024-12-11 13:47:15.543514] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
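The dd_bs_lt_native_bs case that finishes above reduces to two steps that can be read off the xtrace: dd/common.sh pulls the data size of the controller's current LBA format (#04, 4096 bytes in this run) out of the identify dump, and basic_rw.sh then expects spdk_dd to reject a --bs below that value. A condensed sketch of that logic, assuming a helper name and an identify_output variable that are not in the scripts; the regex, paths and flags are the ones logged.
# Sketch only: reconstructs the native-block-size check visible in the xtrace above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

native_bs_from_identify() {
  # dd/common.sh@132 in the xtrace: take the data size of LBA Format #04 (the current format here)
  local re='LBA Format #04: Data Size: *([0-9]+)'
  [[ $1 =~ $re ]] && echo "${BASH_REMATCH[1]}"
}

native_bs=$(native_bs_from_identify "$identify_output")   # 4096 in this log

# basic_rw.sh@96: spdk_dd must refuse a --bs smaller than the native block size; the
# harness wraps the call in NOT, plain negation asserts the same thing. In the real
# script the JSON arrives on /dev/fd/61 and the input data on /dev/fd/62.
! "$SPDK_DD" --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
The "--bs value cannot be less than input (1) neither output (4096) native block size" error and the es=234/es=1 handling above are exactly this assertion passing.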
00:06:22.745 [2024-12-11 13:47:15.544121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60908 ] 00:06:22.745 { 00:06:22.745 "subsystems": [ 00:06:22.745 { 00:06:22.745 "subsystem": "bdev", 00:06:22.745 "config": [ 00:06:22.745 { 00:06:22.745 "params": { 00:06:22.745 "trtype": "pcie", 00:06:22.745 "traddr": "0000:00:10.0", 00:06:22.745 "name": "Nvme0" 00:06:22.745 }, 00:06:22.745 "method": "bdev_nvme_attach_controller" 00:06:22.745 }, 00:06:22.745 { 00:06:22.745 "method": "bdev_wait_for_examine" 00:06:22.745 } 00:06:22.745 ] 00:06:22.745 } 00:06:22.745 ] 00:06:22.745 } 00:06:22.745 [2024-12-11 13:47:15.693347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.745 [2024-12-11 13:47:15.756479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.003 [2024-12-11 13:47:15.816013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.003  [2024-12-11T13:47:16.308Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:23.261 00:06:23.261 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:23.261 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:23.261 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.261 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.261 { 00:06:23.261 "subsystems": [ 00:06:23.261 { 00:06:23.261 "subsystem": "bdev", 00:06:23.261 "config": [ 00:06:23.261 { 00:06:23.261 "params": { 00:06:23.261 "trtype": "pcie", 00:06:23.261 "traddr": "0000:00:10.0", 00:06:23.261 "name": "Nvme0" 00:06:23.261 }, 00:06:23.261 "method": "bdev_nvme_attach_controller" 00:06:23.261 }, 00:06:23.261 { 00:06:23.261 "method": "bdev_wait_for_examine" 00:06:23.261 } 00:06:23.261 ] 00:06:23.261 } 00:06:23.261 ] 00:06:23.261 } 00:06:23.261 [2024-12-11 13:47:16.190448] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
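Every spdk_dd call in this job receives the same minimal bdev configuration on --json (the block gen_conf emits, visible repeatedly above). Written out to a regular file so it can be reused by hand; the file path is an assumption, the contents match the log.
# Same config gen_conf pipes into --json above, saved to a file for manual reuse.
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# bdev_nvme_attach_controller binds the NVMe controller at PCIe address 0000:00:10.0 and
# exposes its first namespace as the bdev "Nvme0n1", the --ob/--ib target used in every
# copy above; bdev_wait_for_examine holds start-up until bdev examination finishes so the
# target exists before the copy begins.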
00:06:23.261 [2024-12-11 13:47:16.190585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60921 ] 00:06:23.519 [2024-12-11 13:47:16.338519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.519 [2024-12-11 13:47:16.394016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.519 [2024-12-11 13:47:16.453384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.822  [2024-12-11T13:47:16.869Z] Copying: 60/60 [kB] (average 14 MBps) 00:06:23.822 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.822 13:47:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.822 { 00:06:23.822 "subsystems": [ 00:06:23.822 { 00:06:23.822 "subsystem": "bdev", 00:06:23.822 "config": [ 00:06:23.822 { 00:06:23.822 "params": { 00:06:23.822 "trtype": "pcie", 00:06:23.822 "traddr": "0000:00:10.0", 00:06:23.822 "name": "Nvme0" 00:06:23.822 }, 00:06:23.822 "method": "bdev_nvme_attach_controller" 00:06:23.822 }, 00:06:23.822 { 00:06:23.822 "method": "bdev_wait_for_examine" 00:06:23.822 } 00:06:23.822 ] 00:06:23.822 } 00:06:23.822 ] 00:06:23.822 } 00:06:23.822 [2024-12-11 13:47:16.843833] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:23.822 [2024-12-11 13:47:16.843935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60937 ] 00:06:24.080 [2024-12-11 13:47:16.993214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.080 [2024-12-11 13:47:17.060816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.080 [2024-12-11 13:47:17.119075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.338  [2024-12-11T13:47:17.644Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:24.597 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:24.597 13:47:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:25.164 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:25.164 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.164 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.164 { 00:06:25.164 "subsystems": [ 00:06:25.164 { 00:06:25.164 "subsystem": "bdev", 00:06:25.164 "config": [ 00:06:25.164 { 00:06:25.164 "params": { 00:06:25.164 "trtype": "pcie", 00:06:25.164 "traddr": "0000:00:10.0", 00:06:25.164 "name": "Nvme0" 00:06:25.164 }, 00:06:25.164 "method": "bdev_nvme_attach_controller" 00:06:25.164 }, 00:06:25.164 { 00:06:25.164 "method": "bdev_wait_for_examine" 00:06:25.164 } 00:06:25.164 ] 00:06:25.164 } 00:06:25.164 ] 00:06:25.164 } 00:06:25.164 [2024-12-11 13:47:18.159639] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
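The dd_rw pass under way here repeats one verification cycle per block-size/queue-depth combination; the first one (bs=4096, qd=1, 15 blocks, 61440 bytes) just completed above. Stripped of the xtrace plumbing it is write, read back, compare, clear, as in the sketch below; CONF stands for the bdev config shown earlier and the dump paths are the ones in the log. The full iteration plan over all block sizes and queue depths is sketched after the last cycle further down.
# One basic_rw cycle as logged above (bs=4096, qd=1, count=15 -> 61440 bytes).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # input file (61440 generated bytes per the log)
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # read-back target

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"             # write
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"  # read back
diff -q "$DUMP0" "$DUMP1"                                                          # must match
# clear_nvme (dd/common.sh): zero the first MiB of the bdev before the next combination
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"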
00:06:25.164 [2024-12-11 13:47:18.159785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60961 ] 00:06:25.422 [2024-12-11 13:47:18.307393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.422 [2024-12-11 13:47:18.395547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.422 [2024-12-11 13:47:18.461147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.680  [2024-12-11T13:47:18.985Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:25.938 00:06:25.938 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:25.938 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:25.938 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.938 13:47:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 { 00:06:25.938 "subsystems": [ 00:06:25.938 { 00:06:25.938 "subsystem": "bdev", 00:06:25.938 "config": [ 00:06:25.938 { 00:06:25.938 "params": { 00:06:25.938 "trtype": "pcie", 00:06:25.938 "traddr": "0000:00:10.0", 00:06:25.938 "name": "Nvme0" 00:06:25.938 }, 00:06:25.938 "method": "bdev_nvme_attach_controller" 00:06:25.938 }, 00:06:25.938 { 00:06:25.938 "method": "bdev_wait_for_examine" 00:06:25.938 } 00:06:25.938 ] 00:06:25.938 } 00:06:25.938 ] 00:06:25.938 } 00:06:25.938 [2024-12-11 13:47:18.839589] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:25.938 [2024-12-11 13:47:18.839745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60975 ] 00:06:26.197 [2024-12-11 13:47:18.990966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.197 [2024-12-11 13:47:19.057334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.197 [2024-12-11 13:47:19.112365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.197  [2024-12-11T13:47:19.502Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:26.455 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.455 13:47:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.455 { 00:06:26.455 "subsystems": [ 00:06:26.455 { 00:06:26.455 "subsystem": "bdev", 00:06:26.455 "config": [ 00:06:26.455 { 00:06:26.455 "params": { 00:06:26.455 "trtype": "pcie", 00:06:26.455 "traddr": "0000:00:10.0", 00:06:26.455 "name": "Nvme0" 00:06:26.455 }, 00:06:26.455 "method": "bdev_nvme_attach_controller" 00:06:26.455 }, 00:06:26.455 { 00:06:26.455 "method": "bdev_wait_for_examine" 00:06:26.455 } 00:06:26.455 ] 00:06:26.455 } 00:06:26.455 ] 00:06:26.455 } 00:06:26.455 [2024-12-11 13:47:19.494759] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:26.455 [2024-12-11 13:47:19.495063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:06:26.713 [2024-12-11 13:47:19.639668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.713 [2024-12-11 13:47:19.700651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.713 [2024-12-11 13:47:19.756307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.972  [2024-12-11T13:47:20.277Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:27.230 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:27.230 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.798 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:27.798 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:27.798 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.798 13:47:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.798 [2024-12-11 13:47:20.740582] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:27.798 [2024-12-11 13:47:20.741031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:06:27.798 { 00:06:27.798 "subsystems": [ 00:06:27.798 { 00:06:27.798 "subsystem": "bdev", 00:06:27.798 "config": [ 00:06:27.798 { 00:06:27.798 "params": { 00:06:27.798 "trtype": "pcie", 00:06:27.798 "traddr": "0000:00:10.0", 00:06:27.798 "name": "Nvme0" 00:06:27.798 }, 00:06:27.798 "method": "bdev_nvme_attach_controller" 00:06:27.798 }, 00:06:27.798 { 00:06:27.798 "method": "bdev_wait_for_examine" 00:06:27.798 } 00:06:27.798 ] 00:06:27.798 } 00:06:27.798 ] 00:06:27.798 } 00:06:28.057 [2024-12-11 13:47:20.887194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.057 [2024-12-11 13:47:20.949294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.057 [2024-12-11 13:47:21.004712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.315  [2024-12-11T13:47:21.362Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:28.315 00:06:28.315 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:28.315 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:28.315 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.315 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.574 { 00:06:28.574 "subsystems": [ 00:06:28.574 { 00:06:28.574 "subsystem": "bdev", 00:06:28.574 "config": [ 00:06:28.574 { 00:06:28.574 "params": { 00:06:28.574 "trtype": "pcie", 00:06:28.574 "traddr": "0000:00:10.0", 00:06:28.574 "name": "Nvme0" 00:06:28.574 }, 00:06:28.574 "method": "bdev_nvme_attach_controller" 00:06:28.574 }, 00:06:28.574 { 00:06:28.574 "method": "bdev_wait_for_examine" 00:06:28.574 } 00:06:28.574 ] 00:06:28.574 } 00:06:28.574 ] 00:06:28.574 } 00:06:28.574 [2024-12-11 13:47:21.385292] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:28.574 [2024-12-11 13:47:21.385411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61034 ] 00:06:28.574 [2024-12-11 13:47:21.535365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.574 [2024-12-11 13:47:21.599985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.862 [2024-12-11 13:47:21.655110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.862  [2024-12-11T13:47:22.188Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:29.141 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.141 13:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.141 [2024-12-11 13:47:22.023977] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:29.141 [2024-12-11 13:47:22.024326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61044 ] 00:06:29.141 { 00:06:29.141 "subsystems": [ 00:06:29.141 { 00:06:29.141 "subsystem": "bdev", 00:06:29.141 "config": [ 00:06:29.141 { 00:06:29.141 "params": { 00:06:29.141 "trtype": "pcie", 00:06:29.141 "traddr": "0000:00:10.0", 00:06:29.141 "name": "Nvme0" 00:06:29.141 }, 00:06:29.141 "method": "bdev_nvme_attach_controller" 00:06:29.141 }, 00:06:29.141 { 00:06:29.141 "method": "bdev_wait_for_examine" 00:06:29.141 } 00:06:29.141 ] 00:06:29.141 } 00:06:29.141 ] 00:06:29.141 } 00:06:29.141 [2024-12-11 13:47:22.166565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.399 [2024-12-11 13:47:22.228593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.399 [2024-12-11 13:47:22.285902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.399  [2024-12-11T13:47:22.705Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:29.658 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:29.658 13:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.226 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:30.226 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.226 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.226 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.226 [2024-12-11 13:47:23.198155] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:30.226 [2024-12-11 13:47:23.198261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61068 ] 00:06:30.226 { 00:06:30.226 "subsystems": [ 00:06:30.226 { 00:06:30.226 "subsystem": "bdev", 00:06:30.226 "config": [ 00:06:30.226 { 00:06:30.226 "params": { 00:06:30.226 "trtype": "pcie", 00:06:30.226 "traddr": "0000:00:10.0", 00:06:30.226 "name": "Nvme0" 00:06:30.226 }, 00:06:30.226 "method": "bdev_nvme_attach_controller" 00:06:30.226 }, 00:06:30.226 { 00:06:30.226 "method": "bdev_wait_for_examine" 00:06:30.226 } 00:06:30.226 ] 00:06:30.226 } 00:06:30.226 ] 00:06:30.226 } 00:06:30.485 [2024-12-11 13:47:23.342665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.485 [2024-12-11 13:47:23.407844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.485 [2024-12-11 13:47:23.465109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.744  [2024-12-11T13:47:23.791Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:30.744 00:06:30.744 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:30.744 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:30.744 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.744 13:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.003 [2024-12-11 13:47:23.834457] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:31.003 [2024-12-11 13:47:23.834596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61082 ] 00:06:31.003 { 00:06:31.003 "subsystems": [ 00:06:31.003 { 00:06:31.003 "subsystem": "bdev", 00:06:31.003 "config": [ 00:06:31.003 { 00:06:31.003 "params": { 00:06:31.003 "trtype": "pcie", 00:06:31.003 "traddr": "0000:00:10.0", 00:06:31.003 "name": "Nvme0" 00:06:31.003 }, 00:06:31.003 "method": "bdev_nvme_attach_controller" 00:06:31.003 }, 00:06:31.003 { 00:06:31.003 "method": "bdev_wait_for_examine" 00:06:31.003 } 00:06:31.003 ] 00:06:31.003 } 00:06:31.003 ] 00:06:31.003 } 00:06:31.003 [2024-12-11 13:47:23.980073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.003 [2024-12-11 13:47:24.046340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.262 [2024-12-11 13:47:24.104767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.262  [2024-12-11T13:47:24.568Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:31.521 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.521 13:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.521 [2024-12-11 13:47:24.481417] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:31.521 [2024-12-11 13:47:24.481896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61103 ] 00:06:31.521 { 00:06:31.521 "subsystems": [ 00:06:31.521 { 00:06:31.521 "subsystem": "bdev", 00:06:31.521 "config": [ 00:06:31.521 { 00:06:31.521 "params": { 00:06:31.521 "trtype": "pcie", 00:06:31.521 "traddr": "0000:00:10.0", 00:06:31.521 "name": "Nvme0" 00:06:31.521 }, 00:06:31.521 "method": "bdev_nvme_attach_controller" 00:06:31.521 }, 00:06:31.521 { 00:06:31.521 "method": "bdev_wait_for_examine" 00:06:31.521 } 00:06:31.521 ] 00:06:31.521 } 00:06:31.521 ] 00:06:31.521 } 00:06:31.781 [2024-12-11 13:47:24.630644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.781 [2024-12-11 13:47:24.692445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.781 [2024-12-11 13:47:24.746514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.039  [2024-12-11T13:47:25.086Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:32.039 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:32.039 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.606 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:32.606 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:32.606 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.606 13:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.606 { 00:06:32.606 "subsystems": [ 00:06:32.606 { 00:06:32.606 "subsystem": "bdev", 00:06:32.606 "config": [ 00:06:32.606 { 00:06:32.606 "params": { 00:06:32.606 "trtype": "pcie", 00:06:32.606 "traddr": "0000:00:10.0", 00:06:32.606 "name": "Nvme0" 00:06:32.606 }, 00:06:32.606 "method": "bdev_nvme_attach_controller" 00:06:32.606 }, 00:06:32.606 { 00:06:32.606 "method": "bdev_wait_for_examine" 00:06:32.606 } 00:06:32.606 ] 00:06:32.606 } 00:06:32.606 ] 00:06:32.606 } 00:06:32.606 [2024-12-11 13:47:25.622419] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:32.606 [2024-12-11 13:47:25.622530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ] 00:06:32.865 [2024-12-11 13:47:25.770207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.865 [2024-12-11 13:47:25.830513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.865 [2024-12-11 13:47:25.885907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.123  [2024-12-11T13:47:26.429Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:33.382 00:06:33.382 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:33.382 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.382 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.382 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.382 { 00:06:33.382 "subsystems": [ 00:06:33.382 { 00:06:33.382 "subsystem": "bdev", 00:06:33.382 "config": [ 00:06:33.382 { 00:06:33.382 "params": { 00:06:33.382 "trtype": "pcie", 00:06:33.382 "traddr": "0000:00:10.0", 00:06:33.382 "name": "Nvme0" 00:06:33.382 }, 00:06:33.382 "method": "bdev_nvme_attach_controller" 00:06:33.382 }, 00:06:33.382 { 00:06:33.382 "method": "bdev_wait_for_examine" 00:06:33.382 } 00:06:33.382 ] 00:06:33.382 } 00:06:33.382 ] 00:06:33.382 } 00:06:33.382 [2024-12-11 13:47:26.257874] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:33.382 [2024-12-11 13:47:26.257974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61130 ] 00:06:33.383 [2024-12-11 13:47:26.402496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.709 [2024-12-11 13:47:26.463351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.709 [2024-12-11 13:47:26.517504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.709  [2024-12-11T13:47:27.016Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:33.969 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.969 13:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.969 { 00:06:33.969 "subsystems": [ 00:06:33.969 { 00:06:33.969 "subsystem": "bdev", 00:06:33.969 "config": [ 00:06:33.969 { 00:06:33.969 "params": { 00:06:33.969 "trtype": "pcie", 00:06:33.969 "traddr": "0000:00:10.0", 00:06:33.969 "name": "Nvme0" 00:06:33.969 }, 00:06:33.969 "method": "bdev_nvme_attach_controller" 00:06:33.969 }, 00:06:33.969 { 00:06:33.969 "method": "bdev_wait_for_examine" 00:06:33.969 } 00:06:33.969 ] 00:06:33.969 } 00:06:33.969 ] 00:06:33.969 } 00:06:33.969 [2024-12-11 13:47:26.890418] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:33.970 [2024-12-11 13:47:26.890523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:06:34.228 [2024-12-11 13:47:27.037426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.228 [2024-12-11 13:47:27.099489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.228 [2024-12-11 13:47:27.156287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.228  [2024-12-11T13:47:27.533Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.486 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.486 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.052 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:35.052 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:35.052 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.052 13:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.052 [2024-12-11 13:47:27.958621] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:35.052 [2024-12-11 13:47:27.958779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61170 ] 00:06:35.052 { 00:06:35.052 "subsystems": [ 00:06:35.052 { 00:06:35.052 "subsystem": "bdev", 00:06:35.052 "config": [ 00:06:35.052 { 00:06:35.052 "params": { 00:06:35.052 "trtype": "pcie", 00:06:35.052 "traddr": "0000:00:10.0", 00:06:35.052 "name": "Nvme0" 00:06:35.052 }, 00:06:35.052 "method": "bdev_nvme_attach_controller" 00:06:35.052 }, 00:06:35.052 { 00:06:35.052 "method": "bdev_wait_for_examine" 00:06:35.052 } 00:06:35.052 ] 00:06:35.052 } 00:06:35.052 ] 00:06:35.052 } 00:06:35.311 [2024-12-11 13:47:28.103740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.311 [2024-12-11 13:47:28.163818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.311 [2024-12-11 13:47:28.220906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.311  [2024-12-11T13:47:28.616Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:35.569 00:06:35.569 13:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:35.569 13:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:35.570 13:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.570 13:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.570 [2024-12-11 13:47:28.584814] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:35.570 [2024-12-11 13:47:28.585219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61189 ] 00:06:35.570 { 00:06:35.570 "subsystems": [ 00:06:35.570 { 00:06:35.570 "subsystem": "bdev", 00:06:35.570 "config": [ 00:06:35.570 { 00:06:35.570 "params": { 00:06:35.570 "trtype": "pcie", 00:06:35.570 "traddr": "0000:00:10.0", 00:06:35.570 "name": "Nvme0" 00:06:35.570 }, 00:06:35.570 "method": "bdev_nvme_attach_controller" 00:06:35.570 }, 00:06:35.570 { 00:06:35.570 "method": "bdev_wait_for_examine" 00:06:35.570 } 00:06:35.570 ] 00:06:35.570 } 00:06:35.570 ] 00:06:35.570 } 00:06:35.829 [2024-12-11 13:47:28.733017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.829 [2024-12-11 13:47:28.808145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.829 [2024-12-11 13:47:28.867655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.088  [2024-12-11T13:47:29.394Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:36.347 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.347 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.347 { 00:06:36.347 "subsystems": [ 00:06:36.347 { 00:06:36.347 "subsystem": "bdev", 00:06:36.347 "config": [ 00:06:36.347 { 00:06:36.347 "params": { 00:06:36.347 "trtype": "pcie", 00:06:36.347 "traddr": "0000:00:10.0", 00:06:36.347 "name": "Nvme0" 00:06:36.347 }, 00:06:36.347 "method": "bdev_nvme_attach_controller" 00:06:36.347 }, 00:06:36.347 { 00:06:36.347 "method": "bdev_wait_for_examine" 00:06:36.347 } 00:06:36.347 ] 00:06:36.347 } 00:06:36.347 ] 00:06:36.347 } 00:06:36.347 [2024-12-11 13:47:29.256440] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
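For reference, the combinations dd_rw has just walked through are set up at the top of the test (basic_rw.sh lines 11-27 in the xtrace): three block sizes derived from the native 4096 by left-shifting, queue depths 1 and 64, and a transfer count adjusted per block size. A skeleton of that loop, taken almost verbatim from the xtrace; the cycle body is the write/read/diff/clear sequence sketched earlier.
# Iteration plan reconstructed from the xtrace (variable names as in basic_rw.sh).
native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))   # -> 4096 8192 16384
done
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    : # write / read-back / diff / clear cycle shown earlier
      # counts logged: 15 blocks at 4096 (61440 B), 7 at 8192 (57344 B), 3 at 16384 (49152 B)
  done
done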
00:06:36.347 [2024-12-11 13:47:29.256555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61199 ] 00:06:36.605 [2024-12-11 13:47:29.402188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.605 [2024-12-11 13:47:29.470151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.605 [2024-12-11 13:47:29.527291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.605  [2024-12-11T13:47:29.911Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:36.864 00:06:36.864 00:06:36.864 real 0m14.988s 00:06:36.864 user 0m10.894s 00:06:36.864 sys 0m5.716s 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.864 ************************************ 00:06:36.864 END TEST dd_rw 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.864 ************************************ 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.864 ************************************ 00:06:36.864 START TEST dd_rw_offset 00:06:36.864 ************************************ 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:36.864 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:37.123 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:37.124 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=m8bofoeznewa7m6dj10eoevxq7ajw3t76zzem0iw3wvroox3kdpsbon6px7eqxqh3sdxej9vso4po0sclu2sghq3yhzfvazc8e98urypte993w3xlhe5shbih1dy8k91r3isyzahghaid2fjbwfyn7hewa6fzn7pld9uz4kcuonfrs0u4bf0ezakug5mm0h1l8wpzxngcjbkdocnuuofwcx1zt065omf08vmntlqaze6oibi7zzbb1s1oi9iqffv6rtxfv10k4vkg22oicvhmlxhz69y2avmsdx8ihof1f5dnr40f000xxspymor3gjge1w4h0ek5ngayx0zj0hot2mo8b1h8mkf55qbnht6vx8bille6kp9twvuztgoij0y6h86g1l96a9uuf2jmkby8pfgt409a7esmamvtn37al8qcqh94vmvghgbyv8glut1qzaceb5hkqt5erfnlostnkjugx4vtuu10z4v2qu0k2m2a0c7mlw9e2qt7pscgax5vtvrwwmd6lmyf4qkjhlttjf71a3ni603h7yu2i2ogrwztu73mnu8gfhdp75w0e8q1sdib66zgnwzew33ua84g6fy19irhd6um8fh0c5m0wxdzyba42g560ge50lihsqt7ks8ai55vonxktxg4mrskj3c99fwuh8y7l87v2x6pnjwkf9hvx7qbzyhm5tk5s828z051kqt0jwvqd831dy2xp7c9n0t1qoet3b6ta7eo2c7lmr712vmfzgmjvwzx407dih926fzn0qrg0ph9fjt3chx4rxety4stgntdqk5v7c1dzy5p1673c1iqnzyr0srv5gbgv3q0p5ndjbsehbp58t1ranbcxestcimtp35phye5b3sw2d9fznwz08gki79dhqzpqgyt9u2nlk9to482h68u7k0feo41e5vz9v0g38anvvyxzg3lqldzwu2dh5mg9so3eufw7s8rafwwvg4gi60sxymxrkhvsohy27l8qm5hga68ho18gfokukjng0a4pa9xn4fs12ffue0mqako4vkhm0t3v61hi4gg9u6m1ynecrws1920t1hgupk0ivssa7xfrx53zxzqpu88sa8hil4yfphph684dye6e6vg3jidjb5lmyzq08kg7bqx2r0pstsle7mnbzp8strpmm9f1xwcqhubcrr7jbttzqgifk1xx4g6eizr233u8grknkiyyapipn4lf22g6r643dra40l6m0cch3v3rq0rejk0xsiuobtndi2k371ci25pezutgqdqvjgtx43jlorg8qiue4s32b7rrj4pfh0p0i2dyyam7hzqkc5pnwm2w9430lau7zpvu34hpahivxdxqcccbsi9ntj5vxqxkapub4pm1toxz619o1sat0fh04b7aynidosi60p2ml8qy4jn1pdsbaubilzov648elhdsg8mp9vet1nr5qxzrhl4w9b1gjokafjsmllwudh2wo224m4fddqf2ak6p8tj5gv538nq39h6swpeksksksi4phgjporn95529dpoi8cgot4m4denk5wk0b0292nba337nsmkhv8odrywf3pfs8h3vupw2dw5rspfi89vb4fjr711pf1nywidqknah1lbsqrb8eewg5cvl89fhwulc2avdzwas10zb33nm2vlijbu4nzm2mnib3vaidi52a35pnvhsdx3pq8d4738taimml62mqur40j2cd43uzf9jseim8z4iumidky07m7sfhqk4z6bhcci04mxmkmjyhnd04vse1g83xf5na3jl4bq4ltaqk0vrcfpwwl4gwpez03l2ug0pk9ub7zt6qnowsn7a4jlkrvskxi51r7kzqsw8pnchag8plx7pu0ndrj3m7dd1u2gmcx7uy24fm34rl2tiz7tz957ml03yyk930r3hjcofdvj1jk27qqnztml41de1tj2rmrceqww0zrfs80r4ovvdipm2fszzy34qsyzzcxwvvcs8y52ih15wu4hzrr2qgvglmm2iib98t3uwumeimedzo1kl3f3jhagte9dioxweuv7ujl6z9r9qegm0bsqogdoc02ltx9xqvroalc8vbyu0jspwtll4qtx50xdy0amdhock675vshrvqpua8b3voe9ewr6r3o816qaxuhaxyoc7048wr85vlu3mtnlvdm9j2e1atvkj7lokahyu33db3quu7h0b2h4yspztswu9iwzxbgad7hjcf8j8rond7vu0lrx4qb90fnuz3fj736u7np1kaasweufpmolbneolry2f47if3wu954s50xgrvz3o5n6b0t40lthlepddogvxhbk8jp1os8qhdtcn1tdde0fwfqptq7vgzd1hicz5ou8912maapwukh452mc75s364j120jkmg7jlzm63n9a9yhfdb5h71czkd5fgbmr2bepuju32b4elvh9ka6hauf3eop5gvd8mu215f44n55ry1x8r0rxun4r3rrhj56zjhgbg5tvylzw5d6po25he66frlw47h2vh7kik7lrgeu8nt0wg0bs94b3hs0tnqrmaq31fb65399roxic6d3rz9ftm1jwrckhruisrvilko1kxvjtwjcj1uocfke18140et77iyfd68e7crwspr337qllzv6c8ha70n6zko8hon0180t6b6f6nhqv9q1qz01m33nv1maa953wk4anzv7m2vx5zx7zfn4xmeh5pig539hp0ucbalw9c6gqls1tgcjydc9ayhww8kjg8v2y93fx8nhoc9z9qh8mpgh6h494zt3dt43o2m0816o1tdml7u5pcqaubp3wuvc0xqh9gvvc7mk8ftfkmjynjmrzkwny489757mr9aext0xv3v1zvba4grf4jm0zuxlpphh555xr589361e3ylisrmp4nfl06s91gnmhmhkazzyf5xocx2j8ljnr1iksq2wgwuuvl86f29hyxxwbewtonk308ttzwj23jpuao6bca907ik84ctk579yjjnnu8npuk0u0r9sv79kopnfgabn8z5t8ngo2dkykjrpu06p7f1bkhrxxrwl23mdgr4didvs3r2ag4pz2qqsm35ph99swrnbwd2l4pfueqrbg79g8whhc9swluelq7nsl9284kqtcvo2hxdwmz7paschqfqsnpbeenp5rdh75w8znb5wqths9mu3vfy0lx036hq60ub4ivytqxmo4eev49peh8brvdjiopr06gko63sk9h1qqd0wenfro0j8x8nfu088fim92ko9wfgetoaacbvytc2dwd1wcyzcl46l5z6u61a32398a00v4kpvevamdwnku0fpp5lwt94y2muav4ve513g2u293oga99uw6zy42020qfw2m7dtrgal4vec8ubpgr7rptewt05fz1ce3hqseyuwnr8k8lxj2sr05dz1cgbsvyqf704a0hkr9fcsijwolpv0ny485oczq1nr25zxqz30uhutokyfv2seeoj1grgq5pg1xcvia2synu9kcl2jklctm12byqvhju
b4kngyjtk2a8eeh8546gigjgz1ypyiznyt0wo71fot3f9r2unmfy4vf54cxw2kewl4ymp3ktycodrqguguwijs52zhwodubs4rw0rsyjen5m78kxapr9oekscqce0v6uqv0qtbvlc59wnjojr0tlksrr6d4c22e7vd8s0kwecelcbwmkci21ylj6je8x9k1cv7wcgno94v9rqimd15zja9avosc7a2a0uqxbechnppi8q9tbnsmi3814fux7nnj85ehnezqtgi40aw99d3hqc20uermqevqkrsoejerolaxrdladhujm0ngheoys8f0h5vr6lmv2kv2fjqf7ihtyahh9pf70sn4hcuay1kd9ue9rj4gm733kjsw7qemiirborr0darfzwo2lmjd6lsymw6ccgthf9r4ko0nnbajt0ao4ffat85qdhad4y9x1bzym5bkyb0hy9zy2l9kx1215wkr4ywhprgv7ec01v8rrc4k1uaumpzlt5xxda4gup0xojvryqyl7vniwytrvlivzvt3ggwyxc1hqv6 00:06:37.124 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:37.124 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:37.124 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:37.124 13:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:37.124 [2024-12-11 13:47:29.988501] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:37.124 [2024-12-11 13:47:29.988621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61237 ] 00:06:37.124 { 00:06:37.124 "subsystems": [ 00:06:37.124 { 00:06:37.124 "subsystem": "bdev", 00:06:37.124 "config": [ 00:06:37.124 { 00:06:37.124 "params": { 00:06:37.124 "trtype": "pcie", 00:06:37.124 "traddr": "0000:00:10.0", 00:06:37.124 "name": "Nvme0" 00:06:37.124 }, 00:06:37.124 "method": "bdev_nvme_attach_controller" 00:06:37.124 }, 00:06:37.124 { 00:06:37.124 "method": "bdev_wait_for_examine" 00:06:37.124 } 00:06:37.124 ] 00:06:37.124 } 00:06:37.124 ] 00:06:37.124 } 00:06:37.124 [2024-12-11 13:47:30.136687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.382 [2024-12-11 13:47:30.201867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.382 [2024-12-11 13:47:30.257918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.382  [2024-12-11T13:47:30.688Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:37.641 00:06:37.641 13:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:37.641 13:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:37.641 13:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:37.641 13:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:37.641 { 00:06:37.641 "subsystems": [ 00:06:37.641 { 00:06:37.641 "subsystem": "bdev", 00:06:37.641 "config": [ 00:06:37.641 { 00:06:37.641 "params": { 00:06:37.641 "trtype": "pcie", 00:06:37.641 "traddr": "0000:00:10.0", 00:06:37.641 "name": "Nvme0" 00:06:37.641 }, 00:06:37.641 "method": "bdev_nvme_attach_controller" 00:06:37.641 }, 00:06:37.641 { 00:06:37.641 "method": "bdev_wait_for_examine" 00:06:37.641 } 00:06:37.641 ] 00:06:37.641 } 00:06:37.641 ] 00:06:37.641 } 00:06:37.641 [2024-12-11 13:47:30.626388] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
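The dd_rw_offset case that has just started checks that --seek on the write side and --skip on the read side address the same block: the 4096-character string generated above is written one block past the start of Nvme0n1, read back from the same offset, and compared character for character (the read -rn4096 / [[ ... == ... ]] pair below). A sketch using the flags from the log; CONF and the dump paths are as before, and cmp stands in for the in-shell string comparison.
# dd_rw_offset as exercised above: write at block offset 1, read the same block back.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # holds the 4096 generated characters
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1            --json "$CONF"   # write past block 0
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1  --json "$CONF"   # read that block back
cmp -n 4096 "$DUMP0" "$DUMP1"   # rough equivalent of the string comparison done by the harness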
00:06:37.641 [2024-12-11 13:47:30.626488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61256 ] 00:06:37.900 [2024-12-11 13:47:30.773350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.900 [2024-12-11 13:47:30.835342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.900 [2024-12-11 13:47:30.889630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.172  [2024-12-11T13:47:31.219Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:38.172 00:06:38.172 13:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:38.172 13:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ m8bofoeznewa7m6dj10eoevxq7ajw3t76zzem0iw3wvroox3kdpsbon6px7eqxqh3sdxej9vso4po0sclu2sghq3yhzfvazc8e98urypte993w3xlhe5shbih1dy8k91r3isyzahghaid2fjbwfyn7hewa6fzn7pld9uz4kcuonfrs0u4bf0ezakug5mm0h1l8wpzxngcjbkdocnuuofwcx1zt065omf08vmntlqaze6oibi7zzbb1s1oi9iqffv6rtxfv10k4vkg22oicvhmlxhz69y2avmsdx8ihof1f5dnr40f000xxspymor3gjge1w4h0ek5ngayx0zj0hot2mo8b1h8mkf55qbnht6vx8bille6kp9twvuztgoij0y6h86g1l96a9uuf2jmkby8pfgt409a7esmamvtn37al8qcqh94vmvghgbyv8glut1qzaceb5hkqt5erfnlostnkjugx4vtuu10z4v2qu0k2m2a0c7mlw9e2qt7pscgax5vtvrwwmd6lmyf4qkjhlttjf71a3ni603h7yu2i2ogrwztu73mnu8gfhdp75w0e8q1sdib66zgnwzew33ua84g6fy19irhd6um8fh0c5m0wxdzyba42g560ge50lihsqt7ks8ai55vonxktxg4mrskj3c99fwuh8y7l87v2x6pnjwkf9hvx7qbzyhm5tk5s828z051kqt0jwvqd831dy2xp7c9n0t1qoet3b6ta7eo2c7lmr712vmfzgmjvwzx407dih926fzn0qrg0ph9fjt3chx4rxety4stgntdqk5v7c1dzy5p1673c1iqnzyr0srv5gbgv3q0p5ndjbsehbp58t1ranbcxestcimtp35phye5b3sw2d9fznwz08gki79dhqzpqgyt9u2nlk9to482h68u7k0feo41e5vz9v0g38anvvyxzg3lqldzwu2dh5mg9so3eufw7s8rafwwvg4gi60sxymxrkhvsohy27l8qm5hga68ho18gfokukjng0a4pa9xn4fs12ffue0mqako4vkhm0t3v61hi4gg9u6m1ynecrws1920t1hgupk0ivssa7xfrx53zxzqpu88sa8hil4yfphph684dye6e6vg3jidjb5lmyzq08kg7bqx2r0pstsle7mnbzp8strpmm9f1xwcqhubcrr7jbttzqgifk1xx4g6eizr233u8grknkiyyapipn4lf22g6r643dra40l6m0cch3v3rq0rejk0xsiuobtndi2k371ci25pezutgqdqvjgtx43jlorg8qiue4s32b7rrj4pfh0p0i2dyyam7hzqkc5pnwm2w9430lau7zpvu34hpahivxdxqcccbsi9ntj5vxqxkapub4pm1toxz619o1sat0fh04b7aynidosi60p2ml8qy4jn1pdsbaubilzov648elhdsg8mp9vet1nr5qxzrhl4w9b1gjokafjsmllwudh2wo224m4fddqf2ak6p8tj5gv538nq39h6swpeksksksi4phgjporn95529dpoi8cgot4m4denk5wk0b0292nba337nsmkhv8odrywf3pfs8h3vupw2dw5rspfi89vb4fjr711pf1nywidqknah1lbsqrb8eewg5cvl89fhwulc2avdzwas10zb33nm2vlijbu4nzm2mnib3vaidi52a35pnvhsdx3pq8d4738taimml62mqur40j2cd43uzf9jseim8z4iumidky07m7sfhqk4z6bhcci04mxmkmjyhnd04vse1g83xf5na3jl4bq4ltaqk0vrcfpwwl4gwpez03l2ug0pk9ub7zt6qnowsn7a4jlkrvskxi51r7kzqsw8pnchag8plx7pu0ndrj3m7dd1u2gmcx7uy24fm34rl2tiz7tz957ml03yyk930r3hjcofdvj1jk27qqnztml41de1tj2rmrceqww0zrfs80r4ovvdipm2fszzy34qsyzzcxwvvcs8y52ih15wu4hzrr2qgvglmm2iib98t3uwumeimedzo1kl3f3jhagte9dioxweuv7ujl6z9r9qegm0bsqogdoc02ltx9xqvroalc8vbyu0jspwtll4qtx50xdy0amdhock675vshrvqpua8b3voe9ewr6r3o816qaxuhaxyoc7048wr85vlu3mtnlvdm9j2e1atvkj7lokahyu33db3quu7h0b2h4yspztswu9iwzxbgad7hjcf8j8rond7vu0lrx4qb90fnuz3fj736u7np1kaasweufpmolbneolry2f47if3wu954s50xgrvz3o5n6b0t40lthlepddogvxhbk8jp1os8qhdtcn1tdde0fwfqptq7vgzd1hicz5ou8912maapwukh452mc75s364j120jkmg7jlzm63n9a9yhfdb5h71czkd5fgbmr2bepuju32b4elvh9ka6hauf3eop5gvd8mu215f44n55ry1x8r0rxun4r3rrhj56zjhgbg5tvylzw5d6po25he66frlw47h2vh7kik7lrgeu8nt0wg0bs94b3hs0tnqrmaq31fb65399roxic6d3rz9ftm1jwrckhruisrvi
lko1kxvjtwjcj1uocfke18140et77iyfd68e7crwspr337qllzv6c8ha70n6zko8hon0180t6b6f6nhqv9q1qz01m33nv1maa953wk4anzv7m2vx5zx7zfn4xmeh5pig539hp0ucbalw9c6gqls1tgcjydc9ayhww8kjg8v2y93fx8nhoc9z9qh8mpgh6h494zt3dt43o2m0816o1tdml7u5pcqaubp3wuvc0xqh9gvvc7mk8ftfkmjynjmrzkwny489757mr9aext0xv3v1zvba4grf4jm0zuxlpphh555xr589361e3ylisrmp4nfl06s91gnmhmhkazzyf5xocx2j8ljnr1iksq2wgwuuvl86f29hyxxwbewtonk308ttzwj23jpuao6bca907ik84ctk579yjjnnu8npuk0u0r9sv79kopnfgabn8z5t8ngo2dkykjrpu06p7f1bkhrxxrwl23mdgr4didvs3r2ag4pz2qqsm35ph99swrnbwd2l4pfueqrbg79g8whhc9swluelq7nsl9284kqtcvo2hxdwmz7paschqfqsnpbeenp5rdh75w8znb5wqths9mu3vfy0lx036hq60ub4ivytqxmo4eev49peh8brvdjiopr06gko63sk9h1qqd0wenfro0j8x8nfu088fim92ko9wfgetoaacbvytc2dwd1wcyzcl46l5z6u61a32398a00v4kpvevamdwnku0fpp5lwt94y2muav4ve513g2u293oga99uw6zy42020qfw2m7dtrgal4vec8ubpgr7rptewt05fz1ce3hqseyuwnr8k8lxj2sr05dz1cgbsvyqf704a0hkr9fcsijwolpv0ny485oczq1nr25zxqz30uhutokyfv2seeoj1grgq5pg1xcvia2synu9kcl2jklctm12byqvhjub4kngyjtk2a8eeh8546gigjgz1ypyiznyt0wo71fot3f9r2unmfy4vf54cxw2kewl4ymp3ktycodrqguguwijs52zhwodubs4rw0rsyjen5m78kxapr9oekscqce0v6uqv0qtbvlc59wnjojr0tlksrr6d4c22e7vd8s0kwecelcbwmkci21ylj6je8x9k1cv7wcgno94v9rqimd15zja9avosc7a2a0uqxbechnppi8q9tbnsmi3814fux7nnj85ehnezqtgi40aw99d3hqc20uermqevqkrsoejerolaxrdladhujm0ngheoys8f0h5vr6lmv2kv2fjqf7ihtyahh9pf70sn4hcuay1kd9ue9rj4gm733kjsw7qemiirborr0darfzwo2lmjd6lsymw6ccgthf9r4ko0nnbajt0ao4ffat85qdhad4y9x1bzym5bkyb0hy9zy2l9kx1215wkr4ywhprgv7ec01v8rrc4k1uaumpzlt5xxda4gup0xojvryqyl7vniwytrvlivzvt3ggwyxc1hqv6 == \m\8\b\o\f\o\e\z\n\e\w\a\7\m\6\d\j\1\0\e\o\e\v\x\q\7\a\j\w\3\t\7\6\z\z\e\m\0\i\w\3\w\v\r\o\o\x\3\k\d\p\s\b\o\n\6\p\x\7\e\q\x\q\h\3\s\d\x\e\j\9\v\s\o\4\p\o\0\s\c\l\u\2\s\g\h\q\3\y\h\z\f\v\a\z\c\8\e\9\8\u\r\y\p\t\e\9\9\3\w\3\x\l\h\e\5\s\h\b\i\h\1\d\y\8\k\9\1\r\3\i\s\y\z\a\h\g\h\a\i\d\2\f\j\b\w\f\y\n\7\h\e\w\a\6\f\z\n\7\p\l\d\9\u\z\4\k\c\u\o\n\f\r\s\0\u\4\b\f\0\e\z\a\k\u\g\5\m\m\0\h\1\l\8\w\p\z\x\n\g\c\j\b\k\d\o\c\n\u\u\o\f\w\c\x\1\z\t\0\6\5\o\m\f\0\8\v\m\n\t\l\q\a\z\e\6\o\i\b\i\7\z\z\b\b\1\s\1\o\i\9\i\q\f\f\v\6\r\t\x\f\v\1\0\k\4\v\k\g\2\2\o\i\c\v\h\m\l\x\h\z\6\9\y\2\a\v\m\s\d\x\8\i\h\o\f\1\f\5\d\n\r\4\0\f\0\0\0\x\x\s\p\y\m\o\r\3\g\j\g\e\1\w\4\h\0\e\k\5\n\g\a\y\x\0\z\j\0\h\o\t\2\m\o\8\b\1\h\8\m\k\f\5\5\q\b\n\h\t\6\v\x\8\b\i\l\l\e\6\k\p\9\t\w\v\u\z\t\g\o\i\j\0\y\6\h\8\6\g\1\l\9\6\a\9\u\u\f\2\j\m\k\b\y\8\p\f\g\t\4\0\9\a\7\e\s\m\a\m\v\t\n\3\7\a\l\8\q\c\q\h\9\4\v\m\v\g\h\g\b\y\v\8\g\l\u\t\1\q\z\a\c\e\b\5\h\k\q\t\5\e\r\f\n\l\o\s\t\n\k\j\u\g\x\4\v\t\u\u\1\0\z\4\v\2\q\u\0\k\2\m\2\a\0\c\7\m\l\w\9\e\2\q\t\7\p\s\c\g\a\x\5\v\t\v\r\w\w\m\d\6\l\m\y\f\4\q\k\j\h\l\t\t\j\f\7\1\a\3\n\i\6\0\3\h\7\y\u\2\i\2\o\g\r\w\z\t\u\7\3\m\n\u\8\g\f\h\d\p\7\5\w\0\e\8\q\1\s\d\i\b\6\6\z\g\n\w\z\e\w\3\3\u\a\8\4\g\6\f\y\1\9\i\r\h\d\6\u\m\8\f\h\0\c\5\m\0\w\x\d\z\y\b\a\4\2\g\5\6\0\g\e\5\0\l\i\h\s\q\t\7\k\s\8\a\i\5\5\v\o\n\x\k\t\x\g\4\m\r\s\k\j\3\c\9\9\f\w\u\h\8\y\7\l\8\7\v\2\x\6\p\n\j\w\k\f\9\h\v\x\7\q\b\z\y\h\m\5\t\k\5\s\8\2\8\z\0\5\1\k\q\t\0\j\w\v\q\d\8\3\1\d\y\2\x\p\7\c\9\n\0\t\1\q\o\e\t\3\b\6\t\a\7\e\o\2\c\7\l\m\r\7\1\2\v\m\f\z\g\m\j\v\w\z\x\4\0\7\d\i\h\9\2\6\f\z\n\0\q\r\g\0\p\h\9\f\j\t\3\c\h\x\4\r\x\e\t\y\4\s\t\g\n\t\d\q\k\5\v\7\c\1\d\z\y\5\p\1\6\7\3\c\1\i\q\n\z\y\r\0\s\r\v\5\g\b\g\v\3\q\0\p\5\n\d\j\b\s\e\h\b\p\5\8\t\1\r\a\n\b\c\x\e\s\t\c\i\m\t\p\3\5\p\h\y\e\5\b\3\s\w\2\d\9\f\z\n\w\z\0\8\g\k\i\7\9\d\h\q\z\p\q\g\y\t\9\u\2\n\l\k\9\t\o\4\8\2\h\6\8\u\7\k\0\f\e\o\4\1\e\5\v\z\9\v\0\g\3\8\a\n\v\v\y\x\z\g\3\l\q\l\d\z\w\u\2\d\h\5\m\g\9\s\o\3\e\u\f\w\7\s\8\r\a\f\w\w\v\g\4\g\i\6\0\s\x\y\m\x\r\k\h\v\s\o\h\y\2\7\l\8\q\m\5\h\g\a\6\8\h\o\1\8\g\f\o\k\u\k\j\n\g\0\a\4\p\a\9\x\n\4\
f\s\1\2\f\f\u\e\0\m\q\a\k\o\4\v\k\h\m\0\t\3\v\6\1\h\i\4\g\g\9\u\6\m\1\y\n\e\c\r\w\s\1\9\2\0\t\1\h\g\u\p\k\0\i\v\s\s\a\7\x\f\r\x\5\3\z\x\z\q\p\u\8\8\s\a\8\h\i\l\4\y\f\p\h\p\h\6\8\4\d\y\e\6\e\6\v\g\3\j\i\d\j\b\5\l\m\y\z\q\0\8\k\g\7\b\q\x\2\r\0\p\s\t\s\l\e\7\m\n\b\z\p\8\s\t\r\p\m\m\9\f\1\x\w\c\q\h\u\b\c\r\r\7\j\b\t\t\z\q\g\i\f\k\1\x\x\4\g\6\e\i\z\r\2\3\3\u\8\g\r\k\n\k\i\y\y\a\p\i\p\n\4\l\f\2\2\g\6\r\6\4\3\d\r\a\4\0\l\6\m\0\c\c\h\3\v\3\r\q\0\r\e\j\k\0\x\s\i\u\o\b\t\n\d\i\2\k\3\7\1\c\i\2\5\p\e\z\u\t\g\q\d\q\v\j\g\t\x\4\3\j\l\o\r\g\8\q\i\u\e\4\s\3\2\b\7\r\r\j\4\p\f\h\0\p\0\i\2\d\y\y\a\m\7\h\z\q\k\c\5\p\n\w\m\2\w\9\4\3\0\l\a\u\7\z\p\v\u\3\4\h\p\a\h\i\v\x\d\x\q\c\c\c\b\s\i\9\n\t\j\5\v\x\q\x\k\a\p\u\b\4\p\m\1\t\o\x\z\6\1\9\o\1\s\a\t\0\f\h\0\4\b\7\a\y\n\i\d\o\s\i\6\0\p\2\m\l\8\q\y\4\j\n\1\p\d\s\b\a\u\b\i\l\z\o\v\6\4\8\e\l\h\d\s\g\8\m\p\9\v\e\t\1\n\r\5\q\x\z\r\h\l\4\w\9\b\1\g\j\o\k\a\f\j\s\m\l\l\w\u\d\h\2\w\o\2\2\4\m\4\f\d\d\q\f\2\a\k\6\p\8\t\j\5\g\v\5\3\8\n\q\3\9\h\6\s\w\p\e\k\s\k\s\k\s\i\4\p\h\g\j\p\o\r\n\9\5\5\2\9\d\p\o\i\8\c\g\o\t\4\m\4\d\e\n\k\5\w\k\0\b\0\2\9\2\n\b\a\3\3\7\n\s\m\k\h\v\8\o\d\r\y\w\f\3\p\f\s\8\h\3\v\u\p\w\2\d\w\5\r\s\p\f\i\8\9\v\b\4\f\j\r\7\1\1\p\f\1\n\y\w\i\d\q\k\n\a\h\1\l\b\s\q\r\b\8\e\e\w\g\5\c\v\l\8\9\f\h\w\u\l\c\2\a\v\d\z\w\a\s\1\0\z\b\3\3\n\m\2\v\l\i\j\b\u\4\n\z\m\2\m\n\i\b\3\v\a\i\d\i\5\2\a\3\5\p\n\v\h\s\d\x\3\p\q\8\d\4\7\3\8\t\a\i\m\m\l\6\2\m\q\u\r\4\0\j\2\c\d\4\3\u\z\f\9\j\s\e\i\m\8\z\4\i\u\m\i\d\k\y\0\7\m\7\s\f\h\q\k\4\z\6\b\h\c\c\i\0\4\m\x\m\k\m\j\y\h\n\d\0\4\v\s\e\1\g\8\3\x\f\5\n\a\3\j\l\4\b\q\4\l\t\a\q\k\0\v\r\c\f\p\w\w\l\4\g\w\p\e\z\0\3\l\2\u\g\0\p\k\9\u\b\7\z\t\6\q\n\o\w\s\n\7\a\4\j\l\k\r\v\s\k\x\i\5\1\r\7\k\z\q\s\w\8\p\n\c\h\a\g\8\p\l\x\7\p\u\0\n\d\r\j\3\m\7\d\d\1\u\2\g\m\c\x\7\u\y\2\4\f\m\3\4\r\l\2\t\i\z\7\t\z\9\5\7\m\l\0\3\y\y\k\9\3\0\r\3\h\j\c\o\f\d\v\j\1\j\k\2\7\q\q\n\z\t\m\l\4\1\d\e\1\t\j\2\r\m\r\c\e\q\w\w\0\z\r\f\s\8\0\r\4\o\v\v\d\i\p\m\2\f\s\z\z\y\3\4\q\s\y\z\z\c\x\w\v\v\c\s\8\y\5\2\i\h\1\5\w\u\4\h\z\r\r\2\q\g\v\g\l\m\m\2\i\i\b\9\8\t\3\u\w\u\m\e\i\m\e\d\z\o\1\k\l\3\f\3\j\h\a\g\t\e\9\d\i\o\x\w\e\u\v\7\u\j\l\6\z\9\r\9\q\e\g\m\0\b\s\q\o\g\d\o\c\0\2\l\t\x\9\x\q\v\r\o\a\l\c\8\v\b\y\u\0\j\s\p\w\t\l\l\4\q\t\x\5\0\x\d\y\0\a\m\d\h\o\c\k\6\7\5\v\s\h\r\v\q\p\u\a\8\b\3\v\o\e\9\e\w\r\6\r\3\o\8\1\6\q\a\x\u\h\a\x\y\o\c\7\0\4\8\w\r\8\5\v\l\u\3\m\t\n\l\v\d\m\9\j\2\e\1\a\t\v\k\j\7\l\o\k\a\h\y\u\3\3\d\b\3\q\u\u\7\h\0\b\2\h\4\y\s\p\z\t\s\w\u\9\i\w\z\x\b\g\a\d\7\h\j\c\f\8\j\8\r\o\n\d\7\v\u\0\l\r\x\4\q\b\9\0\f\n\u\z\3\f\j\7\3\6\u\7\n\p\1\k\a\a\s\w\e\u\f\p\m\o\l\b\n\e\o\l\r\y\2\f\4\7\i\f\3\w\u\9\5\4\s\5\0\x\g\r\v\z\3\o\5\n\6\b\0\t\4\0\l\t\h\l\e\p\d\d\o\g\v\x\h\b\k\8\j\p\1\o\s\8\q\h\d\t\c\n\1\t\d\d\e\0\f\w\f\q\p\t\q\7\v\g\z\d\1\h\i\c\z\5\o\u\8\9\1\2\m\a\a\p\w\u\k\h\4\5\2\m\c\7\5\s\3\6\4\j\1\2\0\j\k\m\g\7\j\l\z\m\6\3\n\9\a\9\y\h\f\d\b\5\h\7\1\c\z\k\d\5\f\g\b\m\r\2\b\e\p\u\j\u\3\2\b\4\e\l\v\h\9\k\a\6\h\a\u\f\3\e\o\p\5\g\v\d\8\m\u\2\1\5\f\4\4\n\5\5\r\y\1\x\8\r\0\r\x\u\n\4\r\3\r\r\h\j\5\6\z\j\h\g\b\g\5\t\v\y\l\z\w\5\d\6\p\o\2\5\h\e\6\6\f\r\l\w\4\7\h\2\v\h\7\k\i\k\7\l\r\g\e\u\8\n\t\0\w\g\0\b\s\9\4\b\3\h\s\0\t\n\q\r\m\a\q\3\1\f\b\6\5\3\9\9\r\o\x\i\c\6\d\3\r\z\9\f\t\m\1\j\w\r\c\k\h\r\u\i\s\r\v\i\l\k\o\1\k\x\v\j\t\w\j\c\j\1\u\o\c\f\k\e\1\8\1\4\0\e\t\7\7\i\y\f\d\6\8\e\7\c\r\w\s\p\r\3\3\7\q\l\l\z\v\6\c\8\h\a\7\0\n\6\z\k\o\8\h\o\n\0\1\8\0\t\6\b\6\f\6\n\h\q\v\9\q\1\q\z\0\1\m\3\3\n\v\1\m\a\a\9\5\3\w\k\4\a\n\z\v\7\m\2\v\x\5\z\x\7\z\f\n\4\x\m\e\h\5\p\i\g\5\3\9\h\p\0\u\c\b\a\l\w\9\c\6\g\q\l\s\1\t\g\c\j\y\d\c\9\a\y\h\w\w\8\k\j\g\8\v\2\y\9\3\f\x\8\n\h\o\c\9\z\9\q\h\8\m\p\g\h\6\h\4\9\4\z\t\3\d\t\4\3\o
\2\m\0\8\1\6\o\1\t\d\m\l\7\u\5\p\c\q\a\u\b\p\3\w\u\v\c\0\x\q\h\9\g\v\v\c\7\m\k\8\f\t\f\k\m\j\y\n\j\m\r\z\k\w\n\y\4\8\9\7\5\7\m\r\9\a\e\x\t\0\x\v\3\v\1\z\v\b\a\4\g\r\f\4\j\m\0\z\u\x\l\p\p\h\h\5\5\5\x\r\5\8\9\3\6\1\e\3\y\l\i\s\r\m\p\4\n\f\l\0\6\s\9\1\g\n\m\h\m\h\k\a\z\z\y\f\5\x\o\c\x\2\j\8\l\j\n\r\1\i\k\s\q\2\w\g\w\u\u\v\l\8\6\f\2\9\h\y\x\x\w\b\e\w\t\o\n\k\3\0\8\t\t\z\w\j\2\3\j\p\u\a\o\6\b\c\a\9\0\7\i\k\8\4\c\t\k\5\7\9\y\j\j\n\n\u\8\n\p\u\k\0\u\0\r\9\s\v\7\9\k\o\p\n\f\g\a\b\n\8\z\5\t\8\n\g\o\2\d\k\y\k\j\r\p\u\0\6\p\7\f\1\b\k\h\r\x\x\r\w\l\2\3\m\d\g\r\4\d\i\d\v\s\3\r\2\a\g\4\p\z\2\q\q\s\m\3\5\p\h\9\9\s\w\r\n\b\w\d\2\l\4\p\f\u\e\q\r\b\g\7\9\g\8\w\h\h\c\9\s\w\l\u\e\l\q\7\n\s\l\9\2\8\4\k\q\t\c\v\o\2\h\x\d\w\m\z\7\p\a\s\c\h\q\f\q\s\n\p\b\e\e\n\p\5\r\d\h\7\5\w\8\z\n\b\5\w\q\t\h\s\9\m\u\3\v\f\y\0\l\x\0\3\6\h\q\6\0\u\b\4\i\v\y\t\q\x\m\o\4\e\e\v\4\9\p\e\h\8\b\r\v\d\j\i\o\p\r\0\6\g\k\o\6\3\s\k\9\h\1\q\q\d\0\w\e\n\f\r\o\0\j\8\x\8\n\f\u\0\8\8\f\i\m\9\2\k\o\9\w\f\g\e\t\o\a\a\c\b\v\y\t\c\2\d\w\d\1\w\c\y\z\c\l\4\6\l\5\z\6\u\6\1\a\3\2\3\9\8\a\0\0\v\4\k\p\v\e\v\a\m\d\w\n\k\u\0\f\p\p\5\l\w\t\9\4\y\2\m\u\a\v\4\v\e\5\1\3\g\2\u\2\9\3\o\g\a\9\9\u\w\6\z\y\4\2\0\2\0\q\f\w\2\m\7\d\t\r\g\a\l\4\v\e\c\8\u\b\p\g\r\7\r\p\t\e\w\t\0\5\f\z\1\c\e\3\h\q\s\e\y\u\w\n\r\8\k\8\l\x\j\2\s\r\0\5\d\z\1\c\g\b\s\v\y\q\f\7\0\4\a\0\h\k\r\9\f\c\s\i\j\w\o\l\p\v\0\n\y\4\8\5\o\c\z\q\1\n\r\2\5\z\x\q\z\3\0\u\h\u\t\o\k\y\f\v\2\s\e\e\o\j\1\g\r\g\q\5\p\g\1\x\c\v\i\a\2\s\y\n\u\9\k\c\l\2\j\k\l\c\t\m\1\2\b\y\q\v\h\j\u\b\4\k\n\g\y\j\t\k\2\a\8\e\e\h\8\5\4\6\g\i\g\j\g\z\1\y\p\y\i\z\n\y\t\0\w\o\7\1\f\o\t\3\f\9\r\2\u\n\m\f\y\4\v\f\5\4\c\x\w\2\k\e\w\l\4\y\m\p\3\k\t\y\c\o\d\r\q\g\u\g\u\w\i\j\s\5\2\z\h\w\o\d\u\b\s\4\r\w\0\r\s\y\j\e\n\5\m\7\8\k\x\a\p\r\9\o\e\k\s\c\q\c\e\0\v\6\u\q\v\0\q\t\b\v\l\c\5\9\w\n\j\o\j\r\0\t\l\k\s\r\r\6\d\4\c\2\2\e\7\v\d\8\s\0\k\w\e\c\e\l\c\b\w\m\k\c\i\2\1\y\l\j\6\j\e\8\x\9\k\1\c\v\7\w\c\g\n\o\9\4\v\9\r\q\i\m\d\1\5\z\j\a\9\a\v\o\s\c\7\a\2\a\0\u\q\x\b\e\c\h\n\p\p\i\8\q\9\t\b\n\s\m\i\3\8\1\4\f\u\x\7\n\n\j\8\5\e\h\n\e\z\q\t\g\i\4\0\a\w\9\9\d\3\h\q\c\2\0\u\e\r\m\q\e\v\q\k\r\s\o\e\j\e\r\o\l\a\x\r\d\l\a\d\h\u\j\m\0\n\g\h\e\o\y\s\8\f\0\h\5\v\r\6\l\m\v\2\k\v\2\f\j\q\f\7\i\h\t\y\a\h\h\9\p\f\7\0\s\n\4\h\c\u\a\y\1\k\d\9\u\e\9\r\j\4\g\m\7\3\3\k\j\s\w\7\q\e\m\i\i\r\b\o\r\r\0\d\a\r\f\z\w\o\2\l\m\j\d\6\l\s\y\m\w\6\c\c\g\t\h\f\9\r\4\k\o\0\n\n\b\a\j\t\0\a\o\4\f\f\a\t\8\5\q\d\h\a\d\4\y\9\x\1\b\z\y\m\5\b\k\y\b\0\h\y\9\z\y\2\l\9\k\x\1\2\1\5\w\k\r\4\y\w\h\p\r\g\v\7\e\c\0\1\v\8\r\r\c\4\k\1\u\a\u\m\p\z\l\t\5\x\x\d\a\4\g\u\p\0\x\o\j\v\r\y\q\y\l\7\v\n\i\w\y\t\r\v\l\i\v\z\v\t\3\g\g\w\y\x\c\1\h\q\v\6 ]] 00:06:38.172 ************************************ 00:06:38.172 END TEST dd_rw_offset 00:06:38.172 ************************************ 00:06:38.172 00:06:38.172 real 0m1.306s 00:06:38.172 user 0m0.877s 00:06:38.172 sys 0m0.614s 00:06:38.172 13:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.172 13:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.438 13:47:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.438 { 00:06:38.438 "subsystems": [ 00:06:38.438 { 00:06:38.438 "subsystem": "bdev", 00:06:38.438 "config": [ 00:06:38.438 { 00:06:38.438 "params": { 00:06:38.438 "trtype": "pcie", 00:06:38.438 "traddr": "0000:00:10.0", 00:06:38.438 "name": "Nvme0" 00:06:38.438 }, 00:06:38.438 "method": "bdev_nvme_attach_controller" 00:06:38.438 }, 00:06:38.438 { 00:06:38.438 "method": "bdev_wait_for_examine" 00:06:38.438 } 00:06:38.438 ] 00:06:38.438 } 00:06:38.438 ] 00:06:38.438 } 00:06:38.438 [2024-12-11 13:47:31.326000] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:38.438 [2024-12-11 13:47:31.326196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61280 ] 00:06:38.697 [2024-12-11 13:47:31.490994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.697 [2024-12-11 13:47:31.554516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.697 [2024-12-11 13:47:31.611791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.697  [2024-12-11T13:47:32.003Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:38.956 00:06:38.956 13:47:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.956 00:06:38.956 real 0m18.236s 00:06:38.956 user 0m12.956s 00:06:38.956 sys 0m7.014s 00:06:38.956 13:47:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.956 ************************************ 00:06:38.956 13:47:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.956 END TEST spdk_dd_basic_rw 00:06:38.956 ************************************ 00:06:38.956 13:47:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:38.956 13:47:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.956 13:47:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.956 13:47:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:38.956 ************************************ 00:06:38.956 START TEST spdk_dd_posix 00:06:38.956 ************************************ 00:06:38.956 13:47:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:39.214 * Looking for test storage... 
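The clear_nvme cleanup above zero-fills the first mebibyte of the Nvme0n1 bdev by handing spdk_dd an inline JSON bdev configuration on a file descriptor. A minimal standalone sketch of that pattern, using only the binary path, flags, JSON shape and PCIe address visible in this log (an illustration, not the literal dd/common.sh code; the temp file name is made up):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# bdev configuration equivalent to the gen_conf output shown above
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# Wipe the first 1 MiB of the bdev; spdk_dd writes to bdevs via --ob
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/nvme0.json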
00:06:39.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.214 --rc genhtml_branch_coverage=1 00:06:39.214 --rc genhtml_function_coverage=1 00:06:39.214 --rc genhtml_legend=1 00:06:39.214 --rc geninfo_all_blocks=1 00:06:39.214 --rc geninfo_unexecuted_blocks=1 00:06:39.214 00:06:39.214 ' 00:06:39.214 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.214 --rc genhtml_branch_coverage=1 00:06:39.214 --rc genhtml_function_coverage=1 00:06:39.214 --rc genhtml_legend=1 00:06:39.214 --rc geninfo_all_blocks=1 00:06:39.214 --rc geninfo_unexecuted_blocks=1 00:06:39.214 00:06:39.214 ' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.215 --rc genhtml_branch_coverage=1 00:06:39.215 --rc genhtml_function_coverage=1 00:06:39.215 --rc genhtml_legend=1 00:06:39.215 --rc geninfo_all_blocks=1 00:06:39.215 --rc geninfo_unexecuted_blocks=1 00:06:39.215 00:06:39.215 ' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.215 --rc genhtml_branch_coverage=1 00:06:39.215 --rc genhtml_function_coverage=1 00:06:39.215 --rc genhtml_legend=1 00:06:39.215 --rc geninfo_all_blocks=1 00:06:39.215 --rc geninfo_unexecuted_blocks=1 00:06:39.215 00:06:39.215 ' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:39.215 * First test run, liburing in use 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 ************************************ 00:06:39.215 START TEST dd_flag_append 00:06:39.215 ************************************ 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=8tucetmzbp1pv2bppv3k0n7dh2xgf8me 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8npjze9mo3jcy11ohg1f2naxoe7j2b6c 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 8tucetmzbp1pv2bppv3k0n7dh2xgf8me 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8npjze9mo3jcy11ohg1f2naxoe7j2b6c 00:06:39.215 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:39.473 [2024-12-11 13:47:32.275813] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
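dd_flag_append checks that --oflag=append really appends instead of truncating: two random 32-byte strings are written to the dump files, dump0 is then copied onto the file that already holds dump1, and the result must be dump1 followed by dump0. Roughly the sequence behind it, with the two strings from this run and illustrative relative file names:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=8tucetmzbp1pv2bppv3k0n7dh2xgf8me   # the two 32-byte strings from gen_bytes 32
dump1=8npjze9mo3jcy11ohg1f2naxoe7j2b6c
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
# append must leave dump1's bytes in place and add dump0 after them
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]]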
00:06:39.473 [2024-12-11 13:47:32.275922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61352 ] 00:06:39.473 [2024-12-11 13:47:32.420091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.473 [2024-12-11 13:47:32.482223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.732 [2024-12-11 13:47:32.538980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.732  [2024-12-11T13:47:33.037Z] Copying: 32/32 [B] (average 31 kBps) 00:06:39.990 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8npjze9mo3jcy11ohg1f2naxoe7j2b6c8tucetmzbp1pv2bppv3k0n7dh2xgf8me == \8\n\p\j\z\e\9\m\o\3\j\c\y\1\1\o\h\g\1\f\2\n\a\x\o\e\7\j\2\b\6\c\8\t\u\c\e\t\m\z\b\p\1\p\v\2\b\p\p\v\3\k\0\n\7\d\h\2\x\g\f\8\m\e ]] 00:06:39.990 00:06:39.990 real 0m0.578s 00:06:39.990 user 0m0.321s 00:06:39.990 sys 0m0.299s 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.990 ************************************ 00:06:39.990 END TEST dd_flag_append 00:06:39.990 ************************************ 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 ************************************ 00:06:39.990 START TEST dd_flag_directory 00:06:39.990 ************************************ 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:39.990 13:47:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.990 [2024-12-11 13:47:32.907209] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:39.990 [2024-12-11 13:47:32.907329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:06:40.249 [2024-12-11 13:47:33.051867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.249 [2024-12-11 13:47:33.109890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.249 [2024-12-11 13:47:33.162953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.249 [2024-12-11 13:47:33.201703] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.249 [2024-12-11 13:47:33.201819] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.249 [2024-12-11 13:47:33.201835] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.507 [2024-12-11 13:47:33.322233] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.507 13:47:33 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:40.507 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.507 [2024-12-11 13:47:33.470382] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:40.507 [2024-12-11 13:47:33.470529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61390 ] 00:06:40.765 [2024-12-11 13:47:33.616525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.765 [2024-12-11 13:47:33.675079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.765 [2024-12-11 13:47:33.731684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.765 [2024-12-11 13:47:33.771001] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.765 [2024-12-11 13:47:33.771087] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.765 [2024-12-11 13:47:33.771117] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.023 [2024-12-11 13:47:33.893344] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.023 00:06:41.023 real 0m1.129s 00:06:41.023 user 0m0.622s 00:06:41.023 sys 0m0.297s 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.023 13:47:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:41.023 ************************************ 00:06:41.023 END TEST dd_flag_directory 00:06:41.023 ************************************ 00:06:41.023 13:47:34 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:41.023 ************************************ 00:06:41.023 START TEST dd_flag_nofollow 00:06:41.023 ************************************ 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.023 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.281 [2024-12-11 13:47:34.096420] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
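The directory and nofollow cases both run spdk_dd through a NOT wrapper, so the test passes only when the copy fails with the expected error ("Not a directory", "Too many levels of symbolic links"). A bare-bones sketch of that expect-failure idiom, with a hypothetical expect_fail helper standing in for autotest_common.sh's actual NOT implementation (SPDK_DD as in the earlier sketches):

# Hypothetical stand-in for NOT: succeed only if the wrapped command fails,
# so an expected error becomes a passing check.
expect_fail() {
    if "$@"; then
        echo "expected failure but the command succeeded: $*" >&2
        return 1
    fi
}
# e.g. reading a regular file with --iflag=directory must be rejected:
expect_fail "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0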
00:06:41.281 [2024-12-11 13:47:34.096530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ] 00:06:41.281 [2024-12-11 13:47:34.245141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.281 [2024-12-11 13:47:34.307709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.539 [2024-12-11 13:47:34.364927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.539 [2024-12-11 13:47:34.404409] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:41.539 [2024-12-11 13:47:34.404470] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:41.539 [2024-12-11 13:47:34.404485] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.539 [2024-12-11 13:47:34.526036] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.797 13:47:34 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.797 13:47:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.797 [2024-12-11 13:47:34.649375] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:41.797 [2024-12-11 13:47:34.649460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:06:41.797 [2024-12-11 13:47:34.790813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.055 [2024-12-11 13:47:34.847829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.055 [2024-12-11 13:47:34.905313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.055 [2024-12-11 13:47:34.945156] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:42.055 [2024-12-11 13:47:34.945236] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:42.055 [2024-12-11 13:47:34.945267] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.055 [2024-12-11 13:47:35.064699] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:42.313 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.313 [2024-12-11 13:47:35.211569] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
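For nofollow, the fixture is a pair of symlinks pointing at the real dump files; with --iflag=nofollow or --oflag=nofollow the open is expected to fail with ELOOP ("Too many levels of symbolic links"), while the final copy through the link without the flag must succeed and round-trip the data. A compact rendering of both outcomes, reusing the expect_fail stand-in from the previous sketch (illustrative, not the literal posix.sh code):

ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
# With nofollow, opening through a symlink must fail
expect_fail "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
expect_fail "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
# Without the flag the copy follows the link and the contents match
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1
[[ "$(cat dd.dump1)" == "$(cat dd.dump0)" ]]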
00:06:42.313 [2024-12-11 13:47:35.211684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61441 ] 00:06:42.313 [2024-12-11 13:47:35.356393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.570 [2024-12-11 13:47:35.420702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.571 [2024-12-11 13:47:35.475853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.571  [2024-12-11T13:47:35.903Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.856 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ tjxk9w74ejs75cxkr9081zte4ca8mdi7yydfmzqa6pzqaxzmisia5qkmmyxzgeyu44xc4okw1onl8di1mjhiy8g0h1a2l3fcbl3fn71ncxzbl6pomk03gsby5eliwby2hf8bzoqmd027tgsmilmlkxwvll1avmh4skn6t3g3vcecndl27pf90jzqu4ypnhg7c9dt511k1sycv18eedlhl106ndsrz3iwfyc7xeh0ks0xmrjbi5vwpjg8pjnidnefg12u2gmfy41kpcwmnbu09wme9cp9d6jpr64ssbzdqmt479asugbiv84pfyrtg95j0qmnpiuiwiibhn66kpg9kkn1aiskofbnnbs07kwtwyhushfuc3vxjdtn0dz65zahztsjfzvd21oiy6zp9uyxpqh6ito81f5sx97goo7qx76i1hfo8ou5orrgjfcfu1qfsva9rputkj6x1ez5hxfx80htm8smdgic3shboefsuibprh99hxoladu9ocb2bfon == \t\j\x\k\9\w\7\4\e\j\s\7\5\c\x\k\r\9\0\8\1\z\t\e\4\c\a\8\m\d\i\7\y\y\d\f\m\z\q\a\6\p\z\q\a\x\z\m\i\s\i\a\5\q\k\m\m\y\x\z\g\e\y\u\4\4\x\c\4\o\k\w\1\o\n\l\8\d\i\1\m\j\h\i\y\8\g\0\h\1\a\2\l\3\f\c\b\l\3\f\n\7\1\n\c\x\z\b\l\6\p\o\m\k\0\3\g\s\b\y\5\e\l\i\w\b\y\2\h\f\8\b\z\o\q\m\d\0\2\7\t\g\s\m\i\l\m\l\k\x\w\v\l\l\1\a\v\m\h\4\s\k\n\6\t\3\g\3\v\c\e\c\n\d\l\2\7\p\f\9\0\j\z\q\u\4\y\p\n\h\g\7\c\9\d\t\5\1\1\k\1\s\y\c\v\1\8\e\e\d\l\h\l\1\0\6\n\d\s\r\z\3\i\w\f\y\c\7\x\e\h\0\k\s\0\x\m\r\j\b\i\5\v\w\p\j\g\8\p\j\n\i\d\n\e\f\g\1\2\u\2\g\m\f\y\4\1\k\p\c\w\m\n\b\u\0\9\w\m\e\9\c\p\9\d\6\j\p\r\6\4\s\s\b\z\d\q\m\t\4\7\9\a\s\u\g\b\i\v\8\4\p\f\y\r\t\g\9\5\j\0\q\m\n\p\i\u\i\w\i\i\b\h\n\6\6\k\p\g\9\k\k\n\1\a\i\s\k\o\f\b\n\n\b\s\0\7\k\w\t\w\y\h\u\s\h\f\u\c\3\v\x\j\d\t\n\0\d\z\6\5\z\a\h\z\t\s\j\f\z\v\d\2\1\o\i\y\6\z\p\9\u\y\x\p\q\h\6\i\t\o\8\1\f\5\s\x\9\7\g\o\o\7\q\x\7\6\i\1\h\f\o\8\o\u\5\o\r\r\g\j\f\c\f\u\1\q\f\s\v\a\9\r\p\u\t\k\j\6\x\1\e\z\5\h\x\f\x\8\0\h\t\m\8\s\m\d\g\i\c\3\s\h\b\o\e\f\s\u\i\b\p\r\h\9\9\h\x\o\l\a\d\u\9\o\c\b\2\b\f\o\n ]] 00:06:42.856 00:06:42.856 real 0m1.673s 00:06:42.856 user 0m0.913s 00:06:42.856 sys 0m0.576s 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:42.856 ************************************ 00:06:42.856 END TEST dd_flag_nofollow 00:06:42.856 ************************************ 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.856 13:47:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:42.856 ************************************ 00:06:42.857 START TEST dd_flag_noatime 00:06:42.857 ************************************ 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733924855 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733924855 00:06:42.857 13:47:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:43.791 13:47:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.791 [2024-12-11 13:47:36.832602] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:43.791 [2024-12-11 13:47:36.832731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:06:44.049 [2024-12-11 13:47:36.985003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.049 [2024-12-11 13:47:37.038635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.309 [2024-12-11 13:47:37.099748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.309  [2024-12-11T13:47:37.356Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.309 00:06:44.309 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.309 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733924855 )) 00:06:44.309 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.309 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733924855 )) 00:06:44.309 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.568 [2024-12-11 13:47:37.394685] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:44.569 [2024-12-11 13:47:37.394842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61498 ] 00:06:44.569 [2024-12-11 13:47:37.539028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.569 [2024-12-11 13:47:37.601081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.830 [2024-12-11 13:47:37.659589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.830  [2024-12-11T13:47:38.136Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.089 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733924857 )) 00:06:45.089 00:06:45.089 real 0m2.145s 00:06:45.089 user 0m0.611s 00:06:45.089 sys 0m0.588s 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 ************************************ 00:06:45.089 END TEST dd_flag_noatime 00:06:45.089 ************************************ 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 ************************************ 00:06:45.089 START TEST dd_flags_misc 00:06:45.089 ************************************ 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:45.089 13:47:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:45.089 [2024-12-11 13:47:38.016593] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
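dd_flag_noatime captures the source file's access time with stat --printf=%X, copies it with --iflag=noatime after a one-second sleep, and asserts that the atime did not move; a second copy without the flag is then expected to advance it. Schematically, assuming the filesystem actually records access times (relatime or noatime mounts would mask the last check) and using the same illustrative file names as before:

atime_before=$(stat --printf=%X dd.dump0)
sleep 1                                               # make an atime change observable
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before ))    # noatime: source atime untouched
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( $(stat --printf=%X dd.dump0) > atime_before ))     # a plain read should advance it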
00:06:45.089 [2024-12-11 13:47:38.016720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:06:45.349 [2024-12-11 13:47:38.166511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.349 [2024-12-11 13:47:38.224022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.349 [2024-12-11 13:47:38.280213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.349  [2024-12-11T13:47:38.655Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.608 00:06:45.608 13:47:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jog0v7v7xgzvosi3o9vs3f8b2uqtgsyv5rsnjynw83pzuxhlvzznk7m6qrtl6jkxl8fb9i24dl2oyvcm4k8add9iqv2ng6qnfek6rqb63g1z66meol4frfba564spc95rfk5cue5ym3orgqc6cvf65u414jn02e27w5ldmq8331yzjcebluvzx128rtrr5wqdcautesxe5dpdzppzoy33vrggmad6k9qlqnucx3b837keuz8ggukomnsdvd20zavfxl3c7y1xwogp039535cjj155m5104879gatcpn7heg1l20q5mlu914zmxvhikdezcmdhvdeilmbst5mku3qnfauh1n6n3vg413vl4u3znn6lq74t0yj41q62a95dn89g0yh5suhwfxc0klwutx3ydld3in4csq43lvhkzz7hljoxiyrvb3dhm60fck8gmnm8ja9uhxwqfen4gi18eks6gcyi297w9urdvr0yhd2jupidfqk1ezdhjff1puz0d8b == \j\o\g\0\v\7\v\7\x\g\z\v\o\s\i\3\o\9\v\s\3\f\8\b\2\u\q\t\g\s\y\v\5\r\s\n\j\y\n\w\8\3\p\z\u\x\h\l\v\z\z\n\k\7\m\6\q\r\t\l\6\j\k\x\l\8\f\b\9\i\2\4\d\l\2\o\y\v\c\m\4\k\8\a\d\d\9\i\q\v\2\n\g\6\q\n\f\e\k\6\r\q\b\6\3\g\1\z\6\6\m\e\o\l\4\f\r\f\b\a\5\6\4\s\p\c\9\5\r\f\k\5\c\u\e\5\y\m\3\o\r\g\q\c\6\c\v\f\6\5\u\4\1\4\j\n\0\2\e\2\7\w\5\l\d\m\q\8\3\3\1\y\z\j\c\e\b\l\u\v\z\x\1\2\8\r\t\r\r\5\w\q\d\c\a\u\t\e\s\x\e\5\d\p\d\z\p\p\z\o\y\3\3\v\r\g\g\m\a\d\6\k\9\q\l\q\n\u\c\x\3\b\8\3\7\k\e\u\z\8\g\g\u\k\o\m\n\s\d\v\d\2\0\z\a\v\f\x\l\3\c\7\y\1\x\w\o\g\p\0\3\9\5\3\5\c\j\j\1\5\5\m\5\1\0\4\8\7\9\g\a\t\c\p\n\7\h\e\g\1\l\2\0\q\5\m\l\u\9\1\4\z\m\x\v\h\i\k\d\e\z\c\m\d\h\v\d\e\i\l\m\b\s\t\5\m\k\u\3\q\n\f\a\u\h\1\n\6\n\3\v\g\4\1\3\v\l\4\u\3\z\n\n\6\l\q\7\4\t\0\y\j\4\1\q\6\2\a\9\5\d\n\8\9\g\0\y\h\5\s\u\h\w\f\x\c\0\k\l\w\u\t\x\3\y\d\l\d\3\i\n\4\c\s\q\4\3\l\v\h\k\z\z\7\h\l\j\o\x\i\y\r\v\b\3\d\h\m\6\0\f\c\k\8\g\m\n\m\8\j\a\9\u\h\x\w\q\f\e\n\4\g\i\1\8\e\k\s\6\g\c\y\i\2\9\7\w\9\u\r\d\v\r\0\y\h\d\2\j\u\p\i\d\f\q\k\1\e\z\d\h\j\f\f\1\p\u\z\0\d\8\b ]] 00:06:45.608 13:47:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:45.608 13:47:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:45.608 [2024-12-11 13:47:38.565536] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:45.608 [2024-12-11 13:47:38.565653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61538 ] 00:06:45.867 [2024-12-11 13:47:38.714263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.867 [2024-12-11 13:47:38.774505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.867 [2024-12-11 13:47:38.833292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.867  [2024-12-11T13:47:39.174Z] Copying: 512/512 [B] (average 500 kBps) 00:06:46.127 00:06:46.127 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jog0v7v7xgzvosi3o9vs3f8b2uqtgsyv5rsnjynw83pzuxhlvzznk7m6qrtl6jkxl8fb9i24dl2oyvcm4k8add9iqv2ng6qnfek6rqb63g1z66meol4frfba564spc95rfk5cue5ym3orgqc6cvf65u414jn02e27w5ldmq8331yzjcebluvzx128rtrr5wqdcautesxe5dpdzppzoy33vrggmad6k9qlqnucx3b837keuz8ggukomnsdvd20zavfxl3c7y1xwogp039535cjj155m5104879gatcpn7heg1l20q5mlu914zmxvhikdezcmdhvdeilmbst5mku3qnfauh1n6n3vg413vl4u3znn6lq74t0yj41q62a95dn89g0yh5suhwfxc0klwutx3ydld3in4csq43lvhkzz7hljoxiyrvb3dhm60fck8gmnm8ja9uhxwqfen4gi18eks6gcyi297w9urdvr0yhd2jupidfqk1ezdhjff1puz0d8b == \j\o\g\0\v\7\v\7\x\g\z\v\o\s\i\3\o\9\v\s\3\f\8\b\2\u\q\t\g\s\y\v\5\r\s\n\j\y\n\w\8\3\p\z\u\x\h\l\v\z\z\n\k\7\m\6\q\r\t\l\6\j\k\x\l\8\f\b\9\i\2\4\d\l\2\o\y\v\c\m\4\k\8\a\d\d\9\i\q\v\2\n\g\6\q\n\f\e\k\6\r\q\b\6\3\g\1\z\6\6\m\e\o\l\4\f\r\f\b\a\5\6\4\s\p\c\9\5\r\f\k\5\c\u\e\5\y\m\3\o\r\g\q\c\6\c\v\f\6\5\u\4\1\4\j\n\0\2\e\2\7\w\5\l\d\m\q\8\3\3\1\y\z\j\c\e\b\l\u\v\z\x\1\2\8\r\t\r\r\5\w\q\d\c\a\u\t\e\s\x\e\5\d\p\d\z\p\p\z\o\y\3\3\v\r\g\g\m\a\d\6\k\9\q\l\q\n\u\c\x\3\b\8\3\7\k\e\u\z\8\g\g\u\k\o\m\n\s\d\v\d\2\0\z\a\v\f\x\l\3\c\7\y\1\x\w\o\g\p\0\3\9\5\3\5\c\j\j\1\5\5\m\5\1\0\4\8\7\9\g\a\t\c\p\n\7\h\e\g\1\l\2\0\q\5\m\l\u\9\1\4\z\m\x\v\h\i\k\d\e\z\c\m\d\h\v\d\e\i\l\m\b\s\t\5\m\k\u\3\q\n\f\a\u\h\1\n\6\n\3\v\g\4\1\3\v\l\4\u\3\z\n\n\6\l\q\7\4\t\0\y\j\4\1\q\6\2\a\9\5\d\n\8\9\g\0\y\h\5\s\u\h\w\f\x\c\0\k\l\w\u\t\x\3\y\d\l\d\3\i\n\4\c\s\q\4\3\l\v\h\k\z\z\7\h\l\j\o\x\i\y\r\v\b\3\d\h\m\6\0\f\c\k\8\g\m\n\m\8\j\a\9\u\h\x\w\q\f\e\n\4\g\i\1\8\e\k\s\6\g\c\y\i\2\9\7\w\9\u\r\d\v\r\0\y\h\d\2\j\u\p\i\d\f\q\k\1\e\z\d\h\j\f\f\1\p\u\z\0\d\8\b ]] 00:06:46.127 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.127 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:46.127 [2024-12-11 13:47:39.119726] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:46.127 [2024-12-11 13:47:39.119824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:06:46.390 [2024-12-11 13:47:39.262953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.390 [2024-12-11 13:47:39.311050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.390 [2024-12-11 13:47:39.368893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.390  [2024-12-11T13:47:39.696Z] Copying: 512/512 [B] (average 500 kBps) 00:06:46.649 00:06:46.649 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jog0v7v7xgzvosi3o9vs3f8b2uqtgsyv5rsnjynw83pzuxhlvzznk7m6qrtl6jkxl8fb9i24dl2oyvcm4k8add9iqv2ng6qnfek6rqb63g1z66meol4frfba564spc95rfk5cue5ym3orgqc6cvf65u414jn02e27w5ldmq8331yzjcebluvzx128rtrr5wqdcautesxe5dpdzppzoy33vrggmad6k9qlqnucx3b837keuz8ggukomnsdvd20zavfxl3c7y1xwogp039535cjj155m5104879gatcpn7heg1l20q5mlu914zmxvhikdezcmdhvdeilmbst5mku3qnfauh1n6n3vg413vl4u3znn6lq74t0yj41q62a95dn89g0yh5suhwfxc0klwutx3ydld3in4csq43lvhkzz7hljoxiyrvb3dhm60fck8gmnm8ja9uhxwqfen4gi18eks6gcyi297w9urdvr0yhd2jupidfqk1ezdhjff1puz0d8b == \j\o\g\0\v\7\v\7\x\g\z\v\o\s\i\3\o\9\v\s\3\f\8\b\2\u\q\t\g\s\y\v\5\r\s\n\j\y\n\w\8\3\p\z\u\x\h\l\v\z\z\n\k\7\m\6\q\r\t\l\6\j\k\x\l\8\f\b\9\i\2\4\d\l\2\o\y\v\c\m\4\k\8\a\d\d\9\i\q\v\2\n\g\6\q\n\f\e\k\6\r\q\b\6\3\g\1\z\6\6\m\e\o\l\4\f\r\f\b\a\5\6\4\s\p\c\9\5\r\f\k\5\c\u\e\5\y\m\3\o\r\g\q\c\6\c\v\f\6\5\u\4\1\4\j\n\0\2\e\2\7\w\5\l\d\m\q\8\3\3\1\y\z\j\c\e\b\l\u\v\z\x\1\2\8\r\t\r\r\5\w\q\d\c\a\u\t\e\s\x\e\5\d\p\d\z\p\p\z\o\y\3\3\v\r\g\g\m\a\d\6\k\9\q\l\q\n\u\c\x\3\b\8\3\7\k\e\u\z\8\g\g\u\k\o\m\n\s\d\v\d\2\0\z\a\v\f\x\l\3\c\7\y\1\x\w\o\g\p\0\3\9\5\3\5\c\j\j\1\5\5\m\5\1\0\4\8\7\9\g\a\t\c\p\n\7\h\e\g\1\l\2\0\q\5\m\l\u\9\1\4\z\m\x\v\h\i\k\d\e\z\c\m\d\h\v\d\e\i\l\m\b\s\t\5\m\k\u\3\q\n\f\a\u\h\1\n\6\n\3\v\g\4\1\3\v\l\4\u\3\z\n\n\6\l\q\7\4\t\0\y\j\4\1\q\6\2\a\9\5\d\n\8\9\g\0\y\h\5\s\u\h\w\f\x\c\0\k\l\w\u\t\x\3\y\d\l\d\3\i\n\4\c\s\q\4\3\l\v\h\k\z\z\7\h\l\j\o\x\i\y\r\v\b\3\d\h\m\6\0\f\c\k\8\g\m\n\m\8\j\a\9\u\h\x\w\q\f\e\n\4\g\i\1\8\e\k\s\6\g\c\y\i\2\9\7\w\9\u\r\d\v\r\0\y\h\d\2\j\u\p\i\d\f\q\k\1\e\z\d\h\j\f\f\1\p\u\z\0\d\8\b ]] 00:06:46.649 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.649 13:47:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:46.649 [2024-12-11 13:47:39.654067] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:46.649 [2024-12-11 13:47:39.654183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61559 ] 00:06:46.909 [2024-12-11 13:47:39.802345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.909 [2024-12-11 13:47:39.865047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.909 [2024-12-11 13:47:39.922925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.167  [2024-12-11T13:47:40.214Z] Copying: 512/512 [B] (average 125 kBps) 00:06:47.167 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jog0v7v7xgzvosi3o9vs3f8b2uqtgsyv5rsnjynw83pzuxhlvzznk7m6qrtl6jkxl8fb9i24dl2oyvcm4k8add9iqv2ng6qnfek6rqb63g1z66meol4frfba564spc95rfk5cue5ym3orgqc6cvf65u414jn02e27w5ldmq8331yzjcebluvzx128rtrr5wqdcautesxe5dpdzppzoy33vrggmad6k9qlqnucx3b837keuz8ggukomnsdvd20zavfxl3c7y1xwogp039535cjj155m5104879gatcpn7heg1l20q5mlu914zmxvhikdezcmdhvdeilmbst5mku3qnfauh1n6n3vg413vl4u3znn6lq74t0yj41q62a95dn89g0yh5suhwfxc0klwutx3ydld3in4csq43lvhkzz7hljoxiyrvb3dhm60fck8gmnm8ja9uhxwqfen4gi18eks6gcyi297w9urdvr0yhd2jupidfqk1ezdhjff1puz0d8b == \j\o\g\0\v\7\v\7\x\g\z\v\o\s\i\3\o\9\v\s\3\f\8\b\2\u\q\t\g\s\y\v\5\r\s\n\j\y\n\w\8\3\p\z\u\x\h\l\v\z\z\n\k\7\m\6\q\r\t\l\6\j\k\x\l\8\f\b\9\i\2\4\d\l\2\o\y\v\c\m\4\k\8\a\d\d\9\i\q\v\2\n\g\6\q\n\f\e\k\6\r\q\b\6\3\g\1\z\6\6\m\e\o\l\4\f\r\f\b\a\5\6\4\s\p\c\9\5\r\f\k\5\c\u\e\5\y\m\3\o\r\g\q\c\6\c\v\f\6\5\u\4\1\4\j\n\0\2\e\2\7\w\5\l\d\m\q\8\3\3\1\y\z\j\c\e\b\l\u\v\z\x\1\2\8\r\t\r\r\5\w\q\d\c\a\u\t\e\s\x\e\5\d\p\d\z\p\p\z\o\y\3\3\v\r\g\g\m\a\d\6\k\9\q\l\q\n\u\c\x\3\b\8\3\7\k\e\u\z\8\g\g\u\k\o\m\n\s\d\v\d\2\0\z\a\v\f\x\l\3\c\7\y\1\x\w\o\g\p\0\3\9\5\3\5\c\j\j\1\5\5\m\5\1\0\4\8\7\9\g\a\t\c\p\n\7\h\e\g\1\l\2\0\q\5\m\l\u\9\1\4\z\m\x\v\h\i\k\d\e\z\c\m\d\h\v\d\e\i\l\m\b\s\t\5\m\k\u\3\q\n\f\a\u\h\1\n\6\n\3\v\g\4\1\3\v\l\4\u\3\z\n\n\6\l\q\7\4\t\0\y\j\4\1\q\6\2\a\9\5\d\n\8\9\g\0\y\h\5\s\u\h\w\f\x\c\0\k\l\w\u\t\x\3\y\d\l\d\3\i\n\4\c\s\q\4\3\l\v\h\k\z\z\7\h\l\j\o\x\i\y\r\v\b\3\d\h\m\6\0\f\c\k\8\g\m\n\m\8\j\a\9\u\h\x\w\q\f\e\n\4\g\i\1\8\e\k\s\6\g\c\y\i\2\9\7\w\9\u\r\d\v\r\0\y\h\d\2\j\u\p\i\d\f\q\k\1\e\z\d\h\j\f\f\1\p\u\z\0\d\8\b ]] 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.167 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:47.425 [2024-12-11 13:47:40.223981] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:47.425 [2024-12-11 13:47:40.224075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61574 ] 00:06:47.425 [2024-12-11 13:47:40.366049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.425 [2024-12-11 13:47:40.422837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.700 [2024-12-11 13:47:40.485027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.700  [2024-12-11T13:47:40.747Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.700 00:06:47.700 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r3zbk7gxw5m7s7qgkhqrafu6t51otvlgf92lehow0imbf8kjqpfb4s1c791rveo8zpwa0g0tdt4l8jx8p85bx2cw3d08k5l242bet1jgrf5ctb2ua217bhnvqodxdr8m8jmigq6te8kwwhjkwh6ol3g2slirgutdm9wxdr9l8qirmdh1f3p9vzl2o7p8zql0m7lnug45aevbnjstrh7kzown49vi41fclp10plikk3tiui56in84xzu79rj95h5jdz4sfkeyciw9qhh9iwbhy8bnhhhod14we6sf50xonu1tz5ev1zcvt1f9u58v9k5558992vbmtq0ahfpzlsus8mxrrjwumestschdgct275z896folo693idzxbhz3w8nin11oenn8d8606ijybxgfojvhyhppdhp39658syxp3muveh6wbesjryw3vkoclf9tpab56yusiscmj5w5vgywtigpfa5tjgb0jmhn8stnc2eh9zafkr7dybjt0vaa5c1 == \r\3\z\b\k\7\g\x\w\5\m\7\s\7\q\g\k\h\q\r\a\f\u\6\t\5\1\o\t\v\l\g\f\9\2\l\e\h\o\w\0\i\m\b\f\8\k\j\q\p\f\b\4\s\1\c\7\9\1\r\v\e\o\8\z\p\w\a\0\g\0\t\d\t\4\l\8\j\x\8\p\8\5\b\x\2\c\w\3\d\0\8\k\5\l\2\4\2\b\e\t\1\j\g\r\f\5\c\t\b\2\u\a\2\1\7\b\h\n\v\q\o\d\x\d\r\8\m\8\j\m\i\g\q\6\t\e\8\k\w\w\h\j\k\w\h\6\o\l\3\g\2\s\l\i\r\g\u\t\d\m\9\w\x\d\r\9\l\8\q\i\r\m\d\h\1\f\3\p\9\v\z\l\2\o\7\p\8\z\q\l\0\m\7\l\n\u\g\4\5\a\e\v\b\n\j\s\t\r\h\7\k\z\o\w\n\4\9\v\i\4\1\f\c\l\p\1\0\p\l\i\k\k\3\t\i\u\i\5\6\i\n\8\4\x\z\u\7\9\r\j\9\5\h\5\j\d\z\4\s\f\k\e\y\c\i\w\9\q\h\h\9\i\w\b\h\y\8\b\n\h\h\h\o\d\1\4\w\e\6\s\f\5\0\x\o\n\u\1\t\z\5\e\v\1\z\c\v\t\1\f\9\u\5\8\v\9\k\5\5\5\8\9\9\2\v\b\m\t\q\0\a\h\f\p\z\l\s\u\s\8\m\x\r\r\j\w\u\m\e\s\t\s\c\h\d\g\c\t\2\7\5\z\8\9\6\f\o\l\o\6\9\3\i\d\z\x\b\h\z\3\w\8\n\i\n\1\1\o\e\n\n\8\d\8\6\0\6\i\j\y\b\x\g\f\o\j\v\h\y\h\p\p\d\h\p\3\9\6\5\8\s\y\x\p\3\m\u\v\e\h\6\w\b\e\s\j\r\y\w\3\v\k\o\c\l\f\9\t\p\a\b\5\6\y\u\s\i\s\c\m\j\5\w\5\v\g\y\w\t\i\g\p\f\a\5\t\j\g\b\0\j\m\h\n\8\s\t\n\c\2\e\h\9\z\a\f\k\r\7\d\y\b\j\t\0\v\a\a\5\c\1 ]] 00:06:47.700 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.700 13:47:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:47.981 [2024-12-11 13:47:40.771335] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:47.981 [2024-12-11 13:47:40.771441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:06:47.981 [2024-12-11 13:47:40.916211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.981 [2024-12-11 13:47:40.975608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.239 [2024-12-11 13:47:41.031962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.239  [2024-12-11T13:47:41.286Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.239 00:06:48.239 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r3zbk7gxw5m7s7qgkhqrafu6t51otvlgf92lehow0imbf8kjqpfb4s1c791rveo8zpwa0g0tdt4l8jx8p85bx2cw3d08k5l242bet1jgrf5ctb2ua217bhnvqodxdr8m8jmigq6te8kwwhjkwh6ol3g2slirgutdm9wxdr9l8qirmdh1f3p9vzl2o7p8zql0m7lnug45aevbnjstrh7kzown49vi41fclp10plikk3tiui56in84xzu79rj95h5jdz4sfkeyciw9qhh9iwbhy8bnhhhod14we6sf50xonu1tz5ev1zcvt1f9u58v9k5558992vbmtq0ahfpzlsus8mxrrjwumestschdgct275z896folo693idzxbhz3w8nin11oenn8d8606ijybxgfojvhyhppdhp39658syxp3muveh6wbesjryw3vkoclf9tpab56yusiscmj5w5vgywtigpfa5tjgb0jmhn8stnc2eh9zafkr7dybjt0vaa5c1 == \r\3\z\b\k\7\g\x\w\5\m\7\s\7\q\g\k\h\q\r\a\f\u\6\t\5\1\o\t\v\l\g\f\9\2\l\e\h\o\w\0\i\m\b\f\8\k\j\q\p\f\b\4\s\1\c\7\9\1\r\v\e\o\8\z\p\w\a\0\g\0\t\d\t\4\l\8\j\x\8\p\8\5\b\x\2\c\w\3\d\0\8\k\5\l\2\4\2\b\e\t\1\j\g\r\f\5\c\t\b\2\u\a\2\1\7\b\h\n\v\q\o\d\x\d\r\8\m\8\j\m\i\g\q\6\t\e\8\k\w\w\h\j\k\w\h\6\o\l\3\g\2\s\l\i\r\g\u\t\d\m\9\w\x\d\r\9\l\8\q\i\r\m\d\h\1\f\3\p\9\v\z\l\2\o\7\p\8\z\q\l\0\m\7\l\n\u\g\4\5\a\e\v\b\n\j\s\t\r\h\7\k\z\o\w\n\4\9\v\i\4\1\f\c\l\p\1\0\p\l\i\k\k\3\t\i\u\i\5\6\i\n\8\4\x\z\u\7\9\r\j\9\5\h\5\j\d\z\4\s\f\k\e\y\c\i\w\9\q\h\h\9\i\w\b\h\y\8\b\n\h\h\h\o\d\1\4\w\e\6\s\f\5\0\x\o\n\u\1\t\z\5\e\v\1\z\c\v\t\1\f\9\u\5\8\v\9\k\5\5\5\8\9\9\2\v\b\m\t\q\0\a\h\f\p\z\l\s\u\s\8\m\x\r\r\j\w\u\m\e\s\t\s\c\h\d\g\c\t\2\7\5\z\8\9\6\f\o\l\o\6\9\3\i\d\z\x\b\h\z\3\w\8\n\i\n\1\1\o\e\n\n\8\d\8\6\0\6\i\j\y\b\x\g\f\o\j\v\h\y\h\p\p\d\h\p\3\9\6\5\8\s\y\x\p\3\m\u\v\e\h\6\w\b\e\s\j\r\y\w\3\v\k\o\c\l\f\9\t\p\a\b\5\6\y\u\s\i\s\c\m\j\5\w\5\v\g\y\w\t\i\g\p\f\a\5\t\j\g\b\0\j\m\h\n\8\s\t\n\c\2\e\h\9\z\a\f\k\r\7\d\y\b\j\t\0\v\a\a\5\c\1 ]] 00:06:48.239 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.239 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:48.497 [2024-12-11 13:47:41.333082] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:48.497 [2024-12-11 13:47:41.333231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:06:48.497 [2024-12-11 13:47:41.480181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.497 [2024-12-11 13:47:41.537872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.756 [2024-12-11 13:47:41.592520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.756  [2024-12-11T13:47:42.061Z] Copying: 512/512 [B] (average 250 kBps) 00:06:49.014 00:06:49.015 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r3zbk7gxw5m7s7qgkhqrafu6t51otvlgf92lehow0imbf8kjqpfb4s1c791rveo8zpwa0g0tdt4l8jx8p85bx2cw3d08k5l242bet1jgrf5ctb2ua217bhnvqodxdr8m8jmigq6te8kwwhjkwh6ol3g2slirgutdm9wxdr9l8qirmdh1f3p9vzl2o7p8zql0m7lnug45aevbnjstrh7kzown49vi41fclp10plikk3tiui56in84xzu79rj95h5jdz4sfkeyciw9qhh9iwbhy8bnhhhod14we6sf50xonu1tz5ev1zcvt1f9u58v9k5558992vbmtq0ahfpzlsus8mxrrjwumestschdgct275z896folo693idzxbhz3w8nin11oenn8d8606ijybxgfojvhyhppdhp39658syxp3muveh6wbesjryw3vkoclf9tpab56yusiscmj5w5vgywtigpfa5tjgb0jmhn8stnc2eh9zafkr7dybjt0vaa5c1 == \r\3\z\b\k\7\g\x\w\5\m\7\s\7\q\g\k\h\q\r\a\f\u\6\t\5\1\o\t\v\l\g\f\9\2\l\e\h\o\w\0\i\m\b\f\8\k\j\q\p\f\b\4\s\1\c\7\9\1\r\v\e\o\8\z\p\w\a\0\g\0\t\d\t\4\l\8\j\x\8\p\8\5\b\x\2\c\w\3\d\0\8\k\5\l\2\4\2\b\e\t\1\j\g\r\f\5\c\t\b\2\u\a\2\1\7\b\h\n\v\q\o\d\x\d\r\8\m\8\j\m\i\g\q\6\t\e\8\k\w\w\h\j\k\w\h\6\o\l\3\g\2\s\l\i\r\g\u\t\d\m\9\w\x\d\r\9\l\8\q\i\r\m\d\h\1\f\3\p\9\v\z\l\2\o\7\p\8\z\q\l\0\m\7\l\n\u\g\4\5\a\e\v\b\n\j\s\t\r\h\7\k\z\o\w\n\4\9\v\i\4\1\f\c\l\p\1\0\p\l\i\k\k\3\t\i\u\i\5\6\i\n\8\4\x\z\u\7\9\r\j\9\5\h\5\j\d\z\4\s\f\k\e\y\c\i\w\9\q\h\h\9\i\w\b\h\y\8\b\n\h\h\h\o\d\1\4\w\e\6\s\f\5\0\x\o\n\u\1\t\z\5\e\v\1\z\c\v\t\1\f\9\u\5\8\v\9\k\5\5\5\8\9\9\2\v\b\m\t\q\0\a\h\f\p\z\l\s\u\s\8\m\x\r\r\j\w\u\m\e\s\t\s\c\h\d\g\c\t\2\7\5\z\8\9\6\f\o\l\o\6\9\3\i\d\z\x\b\h\z\3\w\8\n\i\n\1\1\o\e\n\n\8\d\8\6\0\6\i\j\y\b\x\g\f\o\j\v\h\y\h\p\p\d\h\p\3\9\6\5\8\s\y\x\p\3\m\u\v\e\h\6\w\b\e\s\j\r\y\w\3\v\k\o\c\l\f\9\t\p\a\b\5\6\y\u\s\i\s\c\m\j\5\w\5\v\g\y\w\t\i\g\p\f\a\5\t\j\g\b\0\j\m\h\n\8\s\t\n\c\2\e\h\9\z\a\f\k\r\7\d\y\b\j\t\0\v\a\a\5\c\1 ]] 00:06:49.015 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.015 13:47:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:49.015 [2024-12-11 13:47:41.884874] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:49.015 [2024-12-11 13:47:41.884995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61603 ] 00:06:49.015 [2024-12-11 13:47:42.030314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.273 [2024-12-11 13:47:42.081018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.273 [2024-12-11 13:47:42.137267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.273  [2024-12-11T13:47:42.579Z] Copying: 512/512 [B] (average 250 kBps) 00:06:49.532 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r3zbk7gxw5m7s7qgkhqrafu6t51otvlgf92lehow0imbf8kjqpfb4s1c791rveo8zpwa0g0tdt4l8jx8p85bx2cw3d08k5l242bet1jgrf5ctb2ua217bhnvqodxdr8m8jmigq6te8kwwhjkwh6ol3g2slirgutdm9wxdr9l8qirmdh1f3p9vzl2o7p8zql0m7lnug45aevbnjstrh7kzown49vi41fclp10plikk3tiui56in84xzu79rj95h5jdz4sfkeyciw9qhh9iwbhy8bnhhhod14we6sf50xonu1tz5ev1zcvt1f9u58v9k5558992vbmtq0ahfpzlsus8mxrrjwumestschdgct275z896folo693idzxbhz3w8nin11oenn8d8606ijybxgfojvhyhppdhp39658syxp3muveh6wbesjryw3vkoclf9tpab56yusiscmj5w5vgywtigpfa5tjgb0jmhn8stnc2eh9zafkr7dybjt0vaa5c1 == \r\3\z\b\k\7\g\x\w\5\m\7\s\7\q\g\k\h\q\r\a\f\u\6\t\5\1\o\t\v\l\g\f\9\2\l\e\h\o\w\0\i\m\b\f\8\k\j\q\p\f\b\4\s\1\c\7\9\1\r\v\e\o\8\z\p\w\a\0\g\0\t\d\t\4\l\8\j\x\8\p\8\5\b\x\2\c\w\3\d\0\8\k\5\l\2\4\2\b\e\t\1\j\g\r\f\5\c\t\b\2\u\a\2\1\7\b\h\n\v\q\o\d\x\d\r\8\m\8\j\m\i\g\q\6\t\e\8\k\w\w\h\j\k\w\h\6\o\l\3\g\2\s\l\i\r\g\u\t\d\m\9\w\x\d\r\9\l\8\q\i\r\m\d\h\1\f\3\p\9\v\z\l\2\o\7\p\8\z\q\l\0\m\7\l\n\u\g\4\5\a\e\v\b\n\j\s\t\r\h\7\k\z\o\w\n\4\9\v\i\4\1\f\c\l\p\1\0\p\l\i\k\k\3\t\i\u\i\5\6\i\n\8\4\x\z\u\7\9\r\j\9\5\h\5\j\d\z\4\s\f\k\e\y\c\i\w\9\q\h\h\9\i\w\b\h\y\8\b\n\h\h\h\o\d\1\4\w\e\6\s\f\5\0\x\o\n\u\1\t\z\5\e\v\1\z\c\v\t\1\f\9\u\5\8\v\9\k\5\5\5\8\9\9\2\v\b\m\t\q\0\a\h\f\p\z\l\s\u\s\8\m\x\r\r\j\w\u\m\e\s\t\s\c\h\d\g\c\t\2\7\5\z\8\9\6\f\o\l\o\6\9\3\i\d\z\x\b\h\z\3\w\8\n\i\n\1\1\o\e\n\n\8\d\8\6\0\6\i\j\y\b\x\g\f\o\j\v\h\y\h\p\p\d\h\p\3\9\6\5\8\s\y\x\p\3\m\u\v\e\h\6\w\b\e\s\j\r\y\w\3\v\k\o\c\l\f\9\t\p\a\b\5\6\y\u\s\i\s\c\m\j\5\w\5\v\g\y\w\t\i\g\p\f\a\5\t\j\g\b\0\j\m\h\n\8\s\t\n\c\2\e\h\9\z\a\f\k\r\7\d\y\b\j\t\0\v\a\a\5\c\1 ]] 00:06:49.532 00:06:49.532 real 0m4.411s 00:06:49.532 user 0m2.410s 00:06:49.532 sys 0m2.261s 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.532 ************************************ 00:06:49.532 END TEST dd_flags_misc 00:06:49.532 ************************************ 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:49.532 * Second test run, disabling liburing, forcing AIO 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.532 ************************************ 00:06:49.532 START TEST dd_flag_append_forced_aio 00:06:49.532 ************************************ 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=nqp2unkdn3fc7dmkqdcftkzhfar65fyy 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=qkldck3vj3aferxj0zopac40t6572dps 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s nqp2unkdn3fc7dmkqdcftkzhfar65fyy 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s qkldck3vj3aferxj0zopac40t6572dps 00:06:49.532 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:49.532 [2024-12-11 13:47:42.477608] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:49.532 [2024-12-11 13:47:42.477754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61631 ] 00:06:49.791 [2024-12-11 13:47:42.626746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.791 [2024-12-11 13:47:42.685335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.791 [2024-12-11 13:47:42.744171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.791  [2024-12-11T13:47:43.097Z] Copying: 32/32 [B] (average 31 kBps) 00:06:50.050 00:06:50.050 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ qkldck3vj3aferxj0zopac40t6572dpsnqp2unkdn3fc7dmkqdcftkzhfar65fyy == \q\k\l\d\c\k\3\v\j\3\a\f\e\r\x\j\0\z\o\p\a\c\4\0\t\6\5\7\2\d\p\s\n\q\p\2\u\n\k\d\n\3\f\c\7\d\m\k\q\d\c\f\t\k\z\h\f\a\r\6\5\f\y\y ]] 00:06:50.050 00:06:50.050 real 0m0.573s 00:06:50.050 user 0m0.305s 00:06:50.050 sys 0m0.150s 00:06:50.050 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.050 13:47:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:50.050 ************************************ 00:06:50.050 END TEST dd_flag_append_forced_aio 00:06:50.050 ************************************ 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:50.050 ************************************ 00:06:50.050 START TEST dd_flag_directory_forced_aio 00:06:50.050 ************************************ 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.050 13:47:43 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.050 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.309 [2024-12-11 13:47:43.102695] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:50.309 [2024-12-11 13:47:43.102810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61663 ] 00:06:50.309 [2024-12-11 13:47:43.254220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.309 [2024-12-11 13:47:43.317279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.568 [2024-12-11 13:47:43.378210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.568 [2024-12-11 13:47:43.419309] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:50.568 [2024-12-11 13:47:43.419391] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:50.568 [2024-12-11 13:47:43.419415] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.568 [2024-12-11 13:47:43.539512] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.568 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.826 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.826 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.826 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.826 13:47:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.826 [2024-12-11 13:47:43.659211] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:50.826 [2024-12-11 13:47:43.659325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61667 ] 00:06:50.826 [2024-12-11 13:47:43.802246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.826 [2024-12-11 13:47:43.858936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.085 [2024-12-11 13:47:43.916111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.085 [2024-12-11 13:47:43.953371] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.085 [2024-12-11 13:47:43.953449] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.085 [2024-12-11 13:47:43.953483] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.085 [2024-12-11 13:47:44.073404] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:51.343 13:47:44 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.343 00:06:51.343 real 0m1.106s 00:06:51.343 user 0m0.609s 00:06:51.343 sys 0m0.285s 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.343 ************************************ 00:06:51.343 END TEST dd_flag_directory_forced_aio 00:06:51.343 ************************************ 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.343 ************************************ 00:06:51.343 START TEST dd_flag_nofollow_forced_aio 00:06:51.343 ************************************ 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.343 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.343 [2024-12-11 13:47:44.271170] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:51.343 [2024-12-11 13:47:44.271277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61701 ] 00:06:51.602 [2024-12-11 13:47:44.418565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.602 [2024-12-11 13:47:44.473490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.602 [2024-12-11 13:47:44.532103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.602 [2024-12-11 13:47:44.572296] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:51.602 [2024-12-11 13:47:44.572380] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:51.602 [2024-12-11 13:47:44.572398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.864 [2024-12-11 13:47:44.695258] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.864 13:47:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.864 [2024-12-11 13:47:44.822467] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:51.864 [2024-12-11 13:47:44.822565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61705 ] 00:06:52.125 [2024-12-11 13:47:44.969590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.125 [2024-12-11 13:47:45.018798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.125 [2024-12-11 13:47:45.072903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.125 [2024-12-11 13:47:45.111157] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.125 [2024-12-11 13:47:45.111222] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.125 [2024-12-11 13:47:45.111241] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.382 [2024-12-11 13:47:45.237510] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:52.383 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.383 [2024-12-11 13:47:45.369704] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:52.383 [2024-12-11 13:47:45.369849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:06:52.641 [2024-12-11 13:47:45.510034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.641 [2024-12-11 13:47:45.557400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.641 [2024-12-11 13:47:45.613233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.641  [2024-12-11T13:47:45.946Z] Copying: 512/512 [B] (average 500 kBps) 00:06:52.899 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ fxijh4sphmmxdg7w950ofqrgun6b9d8yb85zjzv2vrf2xdpja93szjyfjf4njji3mok8piyzawyh4nwcv93vzb0ijq5rzosf7m5iavzln50vb88nbkd1ami2zbafkraimaq6u2dbq8j6uxqzl6vvxj43gyg3wo8dnej9in9r7bc1smpuo6651bmf1oy5iexnidfneuwa7285hc7td505oo62pase2kdem9oudz856ks3w0h474dc42j5y5hu5r1aa2qhjimae4bl4e8s7a2byg7m004bm36xzj2hvkbcfh5f83n5qxok4xd3jyrrllv1wz9pck8j3mgiharjk86i94udaace3jo8szew2725o59ifu2pk0p04bqnn5lgvtddorz6vyhxgms1hquiwrvsbk6vja9odrr4z7o11c3mepqrkmalchv9uamce43jw3mj4lxcd98m9imxfh6nrket2724mm3rn1j1b9f6wug4dh57jhzfv6klwlvtpod3ri6b == \f\x\i\j\h\4\s\p\h\m\m\x\d\g\7\w\9\5\0\o\f\q\r\g\u\n\6\b\9\d\8\y\b\8\5\z\j\z\v\2\v\r\f\2\x\d\p\j\a\9\3\s\z\j\y\f\j\f\4\n\j\j\i\3\m\o\k\8\p\i\y\z\a\w\y\h\4\n\w\c\v\9\3\v\z\b\0\i\j\q\5\r\z\o\s\f\7\m\5\i\a\v\z\l\n\5\0\v\b\8\8\n\b\k\d\1\a\m\i\2\z\b\a\f\k\r\a\i\m\a\q\6\u\2\d\b\q\8\j\6\u\x\q\z\l\6\v\v\x\j\4\3\g\y\g\3\w\o\8\d\n\e\j\9\i\n\9\r\7\b\c\1\s\m\p\u\o\6\6\5\1\b\m\f\1\o\y\5\i\e\x\n\i\d\f\n\e\u\w\a\7\2\8\5\h\c\7\t\d\5\0\5\o\o\6\2\p\a\s\e\2\k\d\e\m\9\o\u\d\z\8\5\6\k\s\3\w\0\h\4\7\4\d\c\4\2\j\5\y\5\h\u\5\r\1\a\a\2\q\h\j\i\m\a\e\4\b\l\4\e\8\s\7\a\2\b\y\g\7\m\0\0\4\b\m\3\6\x\z\j\2\h\v\k\b\c\f\h\5\f\8\3\n\5\q\x\o\k\4\x\d\3\j\y\r\r\l\l\v\1\w\z\9\p\c\k\8\j\3\m\g\i\h\a\r\j\k\8\6\i\9\4\u\d\a\a\c\e\3\j\o\8\s\z\e\w\2\7\2\5\o\5\9\i\f\u\2\p\k\0\p\0\4\b\q\n\n\5\l\g\v\t\d\d\o\r\z\6\v\y\h\x\g\m\s\1\h\q\u\i\w\r\v\s\b\k\6\v\j\a\9\o\d\r\r\4\z\7\o\1\1\c\3\m\e\p\q\r\k\m\a\l\c\h\v\9\u\a\m\c\e\4\3\j\w\3\m\j\4\l\x\c\d\9\8\m\9\i\m\x\f\h\6\n\r\k\e\t\2\7\2\4\m\m\3\r\n\1\j\1\b\9\f\6\w\u\g\4\d\h\5\7\j\h\z\f\v\6\k\l\w\l\v\t\p\o\d\3\r\i\6\b ]] 00:06:52.899 00:06:52.899 real 0m1.643s 00:06:52.899 user 0m0.874s 00:06:52.899 sys 0m0.440s 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:52.899 ************************************ 00:06:52.899 END TEST dd_flag_nofollow_forced_aio 00:06:52.899 ************************************ 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:52.899 ************************************ 00:06:52.899 START TEST dd_flag_noatime_forced_aio 00:06:52.899 ************************************ 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733924865 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733924865 00:06:52.899 13:47:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:54.279 13:47:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.279 [2024-12-11 13:47:46.977810] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:54.279 [2024-12-11 13:47:46.977904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:06:54.279 [2024-12-11 13:47:47.138863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.279 [2024-12-11 13:47:47.204553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.279 [2024-12-11 13:47:47.259830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.279  [2024-12-11T13:47:47.585Z] Copying: 512/512 [B] (average 500 kBps) 00:06:54.538 00:06:54.538 13:47:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.538 13:47:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733924865 )) 00:06:54.538 13:47:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.538 13:47:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733924865 )) 00:06:54.538 13:47:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.796 [2024-12-11 13:47:47.585905] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:54.796 [2024-12-11 13:47:47.586026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61770 ] 00:06:54.796 [2024-12-11 13:47:47.731839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.796 [2024-12-11 13:47:47.780309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.796 [2024-12-11 13:47:47.834003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.055  [2024-12-11T13:47:48.102Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.055 00:06:55.055 13:47:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.055 13:47:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733924867 )) 00:06:55.055 00:06:55.055 real 0m2.194s 00:06:55.055 user 0m0.624s 00:06:55.055 sys 0m0.325s 00:06:55.055 13:47:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.055 ************************************ 00:06:55.055 END TEST dd_flag_noatime_forced_aio 00:06:55.055 13:47:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.055 ************************************ 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.314 ************************************ 00:06:55.314 START TEST dd_flags_misc_forced_aio 00:06:55.314 ************************************ 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.314 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:55.314 [2024-12-11 13:47:48.211056] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:55.314 [2024-12-11 13:47:48.211159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61797 ] 00:06:55.314 [2024-12-11 13:47:48.358408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.573 [2024-12-11 13:47:48.413362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.573 [2024-12-11 13:47:48.471139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.573  [2024-12-11T13:47:48.878Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.831 00:06:55.831 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f2yb5m4hisinqkpesd7mya8gljtul5rlgbbialvwazazs6esffsq6rr6f7vncmxfa2mpoq9m9qiawotnit4uorvqlbdroz7benwhenkc1t77v460hlhhpqy888lki55t5ssdf8nh3w7e6us9b094rjbowrgsdfnelku153x6030bkbtwd79c58tgx1ancn7msc4t0ce8dg0pkxf583y4xpl11lpg8f6nvzo0yerp36isx8pd0etv1visuq069uab38j04zchyl4aqxjjownw3n1zgtca281w5g9bmond5fcfpuiyscamj1kasm3lx1wy4kxrpmz4jwcxqyip6lnx4soviw0785g5ond2qljtvj1cpw4j3dksyfx9il6bpyb7gzim1hjqwofnds2e9vtoxjqfervydenmr2l0fphtb6l12qib4zbsrai6furholovy8a3kp2aboo6mc28ns80sxcj5kn1w3li0xctxtd2bdm25edyfvtiagvb6fx4mjfq == 
\f\2\y\b\5\m\4\h\i\s\i\n\q\k\p\e\s\d\7\m\y\a\8\g\l\j\t\u\l\5\r\l\g\b\b\i\a\l\v\w\a\z\a\z\s\6\e\s\f\f\s\q\6\r\r\6\f\7\v\n\c\m\x\f\a\2\m\p\o\q\9\m\9\q\i\a\w\o\t\n\i\t\4\u\o\r\v\q\l\b\d\r\o\z\7\b\e\n\w\h\e\n\k\c\1\t\7\7\v\4\6\0\h\l\h\h\p\q\y\8\8\8\l\k\i\5\5\t\5\s\s\d\f\8\n\h\3\w\7\e\6\u\s\9\b\0\9\4\r\j\b\o\w\r\g\s\d\f\n\e\l\k\u\1\5\3\x\6\0\3\0\b\k\b\t\w\d\7\9\c\5\8\t\g\x\1\a\n\c\n\7\m\s\c\4\t\0\c\e\8\d\g\0\p\k\x\f\5\8\3\y\4\x\p\l\1\1\l\p\g\8\f\6\n\v\z\o\0\y\e\r\p\3\6\i\s\x\8\p\d\0\e\t\v\1\v\i\s\u\q\0\6\9\u\a\b\3\8\j\0\4\z\c\h\y\l\4\a\q\x\j\j\o\w\n\w\3\n\1\z\g\t\c\a\2\8\1\w\5\g\9\b\m\o\n\d\5\f\c\f\p\u\i\y\s\c\a\m\j\1\k\a\s\m\3\l\x\1\w\y\4\k\x\r\p\m\z\4\j\w\c\x\q\y\i\p\6\l\n\x\4\s\o\v\i\w\0\7\8\5\g\5\o\n\d\2\q\l\j\t\v\j\1\c\p\w\4\j\3\d\k\s\y\f\x\9\i\l\6\b\p\y\b\7\g\z\i\m\1\h\j\q\w\o\f\n\d\s\2\e\9\v\t\o\x\j\q\f\e\r\v\y\d\e\n\m\r\2\l\0\f\p\h\t\b\6\l\1\2\q\i\b\4\z\b\s\r\a\i\6\f\u\r\h\o\l\o\v\y\8\a\3\k\p\2\a\b\o\o\6\m\c\2\8\n\s\8\0\s\x\c\j\5\k\n\1\w\3\l\i\0\x\c\t\x\t\d\2\b\d\m\2\5\e\d\y\f\v\t\i\a\g\v\b\6\f\x\4\m\j\f\q ]] 00:06:55.831 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.831 13:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:55.831 [2024-12-11 13:47:48.788807] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:55.831 [2024-12-11 13:47:48.788911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61804 ] 00:06:56.090 [2024-12-11 13:47:48.936392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.090 [2024-12-11 13:47:48.991675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.090 [2024-12-11 13:47:49.046902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.090  [2024-12-11T13:47:49.395Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.348 00:06:56.348 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f2yb5m4hisinqkpesd7mya8gljtul5rlgbbialvwazazs6esffsq6rr6f7vncmxfa2mpoq9m9qiawotnit4uorvqlbdroz7benwhenkc1t77v460hlhhpqy888lki55t5ssdf8nh3w7e6us9b094rjbowrgsdfnelku153x6030bkbtwd79c58tgx1ancn7msc4t0ce8dg0pkxf583y4xpl11lpg8f6nvzo0yerp36isx8pd0etv1visuq069uab38j04zchyl4aqxjjownw3n1zgtca281w5g9bmond5fcfpuiyscamj1kasm3lx1wy4kxrpmz4jwcxqyip6lnx4soviw0785g5ond2qljtvj1cpw4j3dksyfx9il6bpyb7gzim1hjqwofnds2e9vtoxjqfervydenmr2l0fphtb6l12qib4zbsrai6furholovy8a3kp2aboo6mc28ns80sxcj5kn1w3li0xctxtd2bdm25edyfvtiagvb6fx4mjfq == 
\f\2\y\b\5\m\4\h\i\s\i\n\q\k\p\e\s\d\7\m\y\a\8\g\l\j\t\u\l\5\r\l\g\b\b\i\a\l\v\w\a\z\a\z\s\6\e\s\f\f\s\q\6\r\r\6\f\7\v\n\c\m\x\f\a\2\m\p\o\q\9\m\9\q\i\a\w\o\t\n\i\t\4\u\o\r\v\q\l\b\d\r\o\z\7\b\e\n\w\h\e\n\k\c\1\t\7\7\v\4\6\0\h\l\h\h\p\q\y\8\8\8\l\k\i\5\5\t\5\s\s\d\f\8\n\h\3\w\7\e\6\u\s\9\b\0\9\4\r\j\b\o\w\r\g\s\d\f\n\e\l\k\u\1\5\3\x\6\0\3\0\b\k\b\t\w\d\7\9\c\5\8\t\g\x\1\a\n\c\n\7\m\s\c\4\t\0\c\e\8\d\g\0\p\k\x\f\5\8\3\y\4\x\p\l\1\1\l\p\g\8\f\6\n\v\z\o\0\y\e\r\p\3\6\i\s\x\8\p\d\0\e\t\v\1\v\i\s\u\q\0\6\9\u\a\b\3\8\j\0\4\z\c\h\y\l\4\a\q\x\j\j\o\w\n\w\3\n\1\z\g\t\c\a\2\8\1\w\5\g\9\b\m\o\n\d\5\f\c\f\p\u\i\y\s\c\a\m\j\1\k\a\s\m\3\l\x\1\w\y\4\k\x\r\p\m\z\4\j\w\c\x\q\y\i\p\6\l\n\x\4\s\o\v\i\w\0\7\8\5\g\5\o\n\d\2\q\l\j\t\v\j\1\c\p\w\4\j\3\d\k\s\y\f\x\9\i\l\6\b\p\y\b\7\g\z\i\m\1\h\j\q\w\o\f\n\d\s\2\e\9\v\t\o\x\j\q\f\e\r\v\y\d\e\n\m\r\2\l\0\f\p\h\t\b\6\l\1\2\q\i\b\4\z\b\s\r\a\i\6\f\u\r\h\o\l\o\v\y\8\a\3\k\p\2\a\b\o\o\6\m\c\2\8\n\s\8\0\s\x\c\j\5\k\n\1\w\3\l\i\0\x\c\t\x\t\d\2\b\d\m\2\5\e\d\y\f\v\t\i\a\g\v\b\6\f\x\4\m\j\f\q ]] 00:06:56.348 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.348 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:56.348 [2024-12-11 13:47:49.356776] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:56.349 [2024-12-11 13:47:49.356874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61816 ] 00:06:56.608 [2024-12-11 13:47:49.505062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.608 [2024-12-11 13:47:49.555683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.608 [2024-12-11 13:47:49.611291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.608  [2024-12-11T13:47:49.913Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.866 00:06:56.867 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f2yb5m4hisinqkpesd7mya8gljtul5rlgbbialvwazazs6esffsq6rr6f7vncmxfa2mpoq9m9qiawotnit4uorvqlbdroz7benwhenkc1t77v460hlhhpqy888lki55t5ssdf8nh3w7e6us9b094rjbowrgsdfnelku153x6030bkbtwd79c58tgx1ancn7msc4t0ce8dg0pkxf583y4xpl11lpg8f6nvzo0yerp36isx8pd0etv1visuq069uab38j04zchyl4aqxjjownw3n1zgtca281w5g9bmond5fcfpuiyscamj1kasm3lx1wy4kxrpmz4jwcxqyip6lnx4soviw0785g5ond2qljtvj1cpw4j3dksyfx9il6bpyb7gzim1hjqwofnds2e9vtoxjqfervydenmr2l0fphtb6l12qib4zbsrai6furholovy8a3kp2aboo6mc28ns80sxcj5kn1w3li0xctxtd2bdm25edyfvtiagvb6fx4mjfq == 
\f\2\y\b\5\m\4\h\i\s\i\n\q\k\p\e\s\d\7\m\y\a\8\g\l\j\t\u\l\5\r\l\g\b\b\i\a\l\v\w\a\z\a\z\s\6\e\s\f\f\s\q\6\r\r\6\f\7\v\n\c\m\x\f\a\2\m\p\o\q\9\m\9\q\i\a\w\o\t\n\i\t\4\u\o\r\v\q\l\b\d\r\o\z\7\b\e\n\w\h\e\n\k\c\1\t\7\7\v\4\6\0\h\l\h\h\p\q\y\8\8\8\l\k\i\5\5\t\5\s\s\d\f\8\n\h\3\w\7\e\6\u\s\9\b\0\9\4\r\j\b\o\w\r\g\s\d\f\n\e\l\k\u\1\5\3\x\6\0\3\0\b\k\b\t\w\d\7\9\c\5\8\t\g\x\1\a\n\c\n\7\m\s\c\4\t\0\c\e\8\d\g\0\p\k\x\f\5\8\3\y\4\x\p\l\1\1\l\p\g\8\f\6\n\v\z\o\0\y\e\r\p\3\6\i\s\x\8\p\d\0\e\t\v\1\v\i\s\u\q\0\6\9\u\a\b\3\8\j\0\4\z\c\h\y\l\4\a\q\x\j\j\o\w\n\w\3\n\1\z\g\t\c\a\2\8\1\w\5\g\9\b\m\o\n\d\5\f\c\f\p\u\i\y\s\c\a\m\j\1\k\a\s\m\3\l\x\1\w\y\4\k\x\r\p\m\z\4\j\w\c\x\q\y\i\p\6\l\n\x\4\s\o\v\i\w\0\7\8\5\g\5\o\n\d\2\q\l\j\t\v\j\1\c\p\w\4\j\3\d\k\s\y\f\x\9\i\l\6\b\p\y\b\7\g\z\i\m\1\h\j\q\w\o\f\n\d\s\2\e\9\v\t\o\x\j\q\f\e\r\v\y\d\e\n\m\r\2\l\0\f\p\h\t\b\6\l\1\2\q\i\b\4\z\b\s\r\a\i\6\f\u\r\h\o\l\o\v\y\8\a\3\k\p\2\a\b\o\o\6\m\c\2\8\n\s\8\0\s\x\c\j\5\k\n\1\w\3\l\i\0\x\c\t\x\t\d\2\b\d\m\2\5\e\d\y\f\v\t\i\a\g\v\b\6\f\x\4\m\j\f\q ]] 00:06:56.867 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.867 13:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:57.125 [2024-12-11 13:47:49.933022] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:57.125 [2024-12-11 13:47:49.933118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:06:57.125 [2024-12-11 13:47:50.082617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.125 [2024-12-11 13:47:50.137624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.383 [2024-12-11 13:47:50.193426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.383  [2024-12-11T13:47:50.689Z] Copying: 512/512 [B] (average 166 kBps) 00:06:57.642 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f2yb5m4hisinqkpesd7mya8gljtul5rlgbbialvwazazs6esffsq6rr6f7vncmxfa2mpoq9m9qiawotnit4uorvqlbdroz7benwhenkc1t77v460hlhhpqy888lki55t5ssdf8nh3w7e6us9b094rjbowrgsdfnelku153x6030bkbtwd79c58tgx1ancn7msc4t0ce8dg0pkxf583y4xpl11lpg8f6nvzo0yerp36isx8pd0etv1visuq069uab38j04zchyl4aqxjjownw3n1zgtca281w5g9bmond5fcfpuiyscamj1kasm3lx1wy4kxrpmz4jwcxqyip6lnx4soviw0785g5ond2qljtvj1cpw4j3dksyfx9il6bpyb7gzim1hjqwofnds2e9vtoxjqfervydenmr2l0fphtb6l12qib4zbsrai6furholovy8a3kp2aboo6mc28ns80sxcj5kn1w3li0xctxtd2bdm25edyfvtiagvb6fx4mjfq == 
\f\2\y\b\5\m\4\h\i\s\i\n\q\k\p\e\s\d\7\m\y\a\8\g\l\j\t\u\l\5\r\l\g\b\b\i\a\l\v\w\a\z\a\z\s\6\e\s\f\f\s\q\6\r\r\6\f\7\v\n\c\m\x\f\a\2\m\p\o\q\9\m\9\q\i\a\w\o\t\n\i\t\4\u\o\r\v\q\l\b\d\r\o\z\7\b\e\n\w\h\e\n\k\c\1\t\7\7\v\4\6\0\h\l\h\h\p\q\y\8\8\8\l\k\i\5\5\t\5\s\s\d\f\8\n\h\3\w\7\e\6\u\s\9\b\0\9\4\r\j\b\o\w\r\g\s\d\f\n\e\l\k\u\1\5\3\x\6\0\3\0\b\k\b\t\w\d\7\9\c\5\8\t\g\x\1\a\n\c\n\7\m\s\c\4\t\0\c\e\8\d\g\0\p\k\x\f\5\8\3\y\4\x\p\l\1\1\l\p\g\8\f\6\n\v\z\o\0\y\e\r\p\3\6\i\s\x\8\p\d\0\e\t\v\1\v\i\s\u\q\0\6\9\u\a\b\3\8\j\0\4\z\c\h\y\l\4\a\q\x\j\j\o\w\n\w\3\n\1\z\g\t\c\a\2\8\1\w\5\g\9\b\m\o\n\d\5\f\c\f\p\u\i\y\s\c\a\m\j\1\k\a\s\m\3\l\x\1\w\y\4\k\x\r\p\m\z\4\j\w\c\x\q\y\i\p\6\l\n\x\4\s\o\v\i\w\0\7\8\5\g\5\o\n\d\2\q\l\j\t\v\j\1\c\p\w\4\j\3\d\k\s\y\f\x\9\i\l\6\b\p\y\b\7\g\z\i\m\1\h\j\q\w\o\f\n\d\s\2\e\9\v\t\o\x\j\q\f\e\r\v\y\d\e\n\m\r\2\l\0\f\p\h\t\b\6\l\1\2\q\i\b\4\z\b\s\r\a\i\6\f\u\r\h\o\l\o\v\y\8\a\3\k\p\2\a\b\o\o\6\m\c\2\8\n\s\8\0\s\x\c\j\5\k\n\1\w\3\l\i\0\x\c\t\x\t\d\2\b\d\m\2\5\e\d\y\f\v\t\i\a\g\v\b\6\f\x\4\m\j\f\q ]] 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.642 13:47:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:57.642 [2024-12-11 13:47:50.526787] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:57.642 [2024-12-11 13:47:50.526887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:06:57.642 [2024-12-11 13:47:50.675091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.901 [2024-12-11 13:47:50.736600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.901 [2024-12-11 13:47:50.791420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.901  [2024-12-11T13:47:51.206Z] Copying: 512/512 [B] (average 500 kBps) 00:06:58.159 00:06:58.159 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jpqedsa1f4bdrhl0kqzwfz7dno0ls5nsl5i63fxvitf2mi6eo8z19ty8o5ayw08a3g2p563nvcmyh13q9cgm1suwaf4dopd1yoscba9a5wjzwqgh2hyibeebs9j7xtyl21wmpqgy3r0fm86eu5y76a2gjcs6ithx0ams0w25edj9ds611huo2bt79tewvyinq1gkyn3hjebnc3jepdqhip4prmtbjkvsybbpkhpcibm0r5ix4siua9rmwreugs5unbfnhnj2glh9wg8qjlb82cf9ovx8toq1x773ffk6ge9fptjjo0ezs8qzemeeqgem12uahh6n50h0zgz0yx2lqnehrhgevor0i3y9gfdcmqnp6r8rzwfnrm8j1v72zt8oywqtwofqkrahnwddewjl0tiavn88iwhjt9vgxd8u7p9rx9gcwdqog73hna5mk3j4a9qs1l4bcsn5q9y8mrgsfbc8fwyy1su7pj4nvn14fh04glsotblnkhfq5xlgyukt == \j\p\q\e\d\s\a\1\f\4\b\d\r\h\l\0\k\q\z\w\f\z\7\d\n\o\0\l\s\5\n\s\l\5\i\6\3\f\x\v\i\t\f\2\m\i\6\e\o\8\z\1\9\t\y\8\o\5\a\y\w\0\8\a\3\g\2\p\5\6\3\n\v\c\m\y\h\1\3\q\9\c\g\m\1\s\u\w\a\f\4\d\o\p\d\1\y\o\s\c\b\a\9\a\5\w\j\z\w\q\g\h\2\h\y\i\b\e\e\b\s\9\j\7\x\t\y\l\2\1\w\m\p\q\g\y\3\r\0\f\m\8\6\e\u\5\y\7\6\a\2\g\j\c\s\6\i\t\h\x\0\a\m\s\0\w\2\5\e\d\j\9\d\s\6\1\1\h\u\o\2\b\t\7\9\t\e\w\v\y\i\n\q\1\g\k\y\n\3\h\j\e\b\n\c\3\j\e\p\d\q\h\i\p\4\p\r\m\t\b\j\k\v\s\y\b\b\p\k\h\p\c\i\b\m\0\r\5\i\x\4\s\i\u\a\9\r\m\w\r\e\u\g\s\5\u\n\b\f\n\h\n\j\2\g\l\h\9\w\g\8\q\j\l\b\8\2\c\f\9\o\v\x\8\t\o\q\1\x\7\7\3\f\f\k\6\g\e\9\f\p\t\j\j\o\0\e\z\s\8\q\z\e\m\e\e\q\g\e\m\1\2\u\a\h\h\6\n\5\0\h\0\z\g\z\0\y\x\2\l\q\n\e\h\r\h\g\e\v\o\r\0\i\3\y\9\g\f\d\c\m\q\n\p\6\r\8\r\z\w\f\n\r\m\8\j\1\v\7\2\z\t\8\o\y\w\q\t\w\o\f\q\k\r\a\h\n\w\d\d\e\w\j\l\0\t\i\a\v\n\8\8\i\w\h\j\t\9\v\g\x\d\8\u\7\p\9\r\x\9\g\c\w\d\q\o\g\7\3\h\n\a\5\m\k\3\j\4\a\9\q\s\1\l\4\b\c\s\n\5\q\9\y\8\m\r\g\s\f\b\c\8\f\w\y\y\1\s\u\7\p\j\4\n\v\n\1\4\f\h\0\4\g\l\s\o\t\b\l\n\k\h\f\q\5\x\l\g\y\u\k\t ]] 00:06:58.159 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.159 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:58.159 [2024-12-11 13:47:51.104692] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:58.159 [2024-12-11 13:47:51.104818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61840 ] 00:06:58.418 [2024-12-11 13:47:51.252189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.418 [2024-12-11 13:47:51.309462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.418 [2024-12-11 13:47:51.365424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.418  [2024-12-11T13:47:51.723Z] Copying: 512/512 [B] (average 500 kBps) 00:06:58.676 00:06:58.676 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jpqedsa1f4bdrhl0kqzwfz7dno0ls5nsl5i63fxvitf2mi6eo8z19ty8o5ayw08a3g2p563nvcmyh13q9cgm1suwaf4dopd1yoscba9a5wjzwqgh2hyibeebs9j7xtyl21wmpqgy3r0fm86eu5y76a2gjcs6ithx0ams0w25edj9ds611huo2bt79tewvyinq1gkyn3hjebnc3jepdqhip4prmtbjkvsybbpkhpcibm0r5ix4siua9rmwreugs5unbfnhnj2glh9wg8qjlb82cf9ovx8toq1x773ffk6ge9fptjjo0ezs8qzemeeqgem12uahh6n50h0zgz0yx2lqnehrhgevor0i3y9gfdcmqnp6r8rzwfnrm8j1v72zt8oywqtwofqkrahnwddewjl0tiavn88iwhjt9vgxd8u7p9rx9gcwdqog73hna5mk3j4a9qs1l4bcsn5q9y8mrgsfbc8fwyy1su7pj4nvn14fh04glsotblnkhfq5xlgyukt == \j\p\q\e\d\s\a\1\f\4\b\d\r\h\l\0\k\q\z\w\f\z\7\d\n\o\0\l\s\5\n\s\l\5\i\6\3\f\x\v\i\t\f\2\m\i\6\e\o\8\z\1\9\t\y\8\o\5\a\y\w\0\8\a\3\g\2\p\5\6\3\n\v\c\m\y\h\1\3\q\9\c\g\m\1\s\u\w\a\f\4\d\o\p\d\1\y\o\s\c\b\a\9\a\5\w\j\z\w\q\g\h\2\h\y\i\b\e\e\b\s\9\j\7\x\t\y\l\2\1\w\m\p\q\g\y\3\r\0\f\m\8\6\e\u\5\y\7\6\a\2\g\j\c\s\6\i\t\h\x\0\a\m\s\0\w\2\5\e\d\j\9\d\s\6\1\1\h\u\o\2\b\t\7\9\t\e\w\v\y\i\n\q\1\g\k\y\n\3\h\j\e\b\n\c\3\j\e\p\d\q\h\i\p\4\p\r\m\t\b\j\k\v\s\y\b\b\p\k\h\p\c\i\b\m\0\r\5\i\x\4\s\i\u\a\9\r\m\w\r\e\u\g\s\5\u\n\b\f\n\h\n\j\2\g\l\h\9\w\g\8\q\j\l\b\8\2\c\f\9\o\v\x\8\t\o\q\1\x\7\7\3\f\f\k\6\g\e\9\f\p\t\j\j\o\0\e\z\s\8\q\z\e\m\e\e\q\g\e\m\1\2\u\a\h\h\6\n\5\0\h\0\z\g\z\0\y\x\2\l\q\n\e\h\r\h\g\e\v\o\r\0\i\3\y\9\g\f\d\c\m\q\n\p\6\r\8\r\z\w\f\n\r\m\8\j\1\v\7\2\z\t\8\o\y\w\q\t\w\o\f\q\k\r\a\h\n\w\d\d\e\w\j\l\0\t\i\a\v\n\8\8\i\w\h\j\t\9\v\g\x\d\8\u\7\p\9\r\x\9\g\c\w\d\q\o\g\7\3\h\n\a\5\m\k\3\j\4\a\9\q\s\1\l\4\b\c\s\n\5\q\9\y\8\m\r\g\s\f\b\c\8\f\w\y\y\1\s\u\7\p\j\4\n\v\n\1\4\f\h\0\4\g\l\s\o\t\b\l\n\k\h\f\q\5\x\l\g\y\u\k\t ]] 00:06:58.676 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.676 13:47:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:58.676 [2024-12-11 13:47:51.662942] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:58.676 [2024-12-11 13:47:51.663040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61847 ] 00:06:58.934 [2024-12-11 13:47:51.810282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.934 [2024-12-11 13:47:51.869437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.934 [2024-12-11 13:47:51.923732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.934  [2024-12-11T13:47:52.240Z] Copying: 512/512 [B] (average 500 kBps) 00:06:59.193 00:06:59.193 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jpqedsa1f4bdrhl0kqzwfz7dno0ls5nsl5i63fxvitf2mi6eo8z19ty8o5ayw08a3g2p563nvcmyh13q9cgm1suwaf4dopd1yoscba9a5wjzwqgh2hyibeebs9j7xtyl21wmpqgy3r0fm86eu5y76a2gjcs6ithx0ams0w25edj9ds611huo2bt79tewvyinq1gkyn3hjebnc3jepdqhip4prmtbjkvsybbpkhpcibm0r5ix4siua9rmwreugs5unbfnhnj2glh9wg8qjlb82cf9ovx8toq1x773ffk6ge9fptjjo0ezs8qzemeeqgem12uahh6n50h0zgz0yx2lqnehrhgevor0i3y9gfdcmqnp6r8rzwfnrm8j1v72zt8oywqtwofqkrahnwddewjl0tiavn88iwhjt9vgxd8u7p9rx9gcwdqog73hna5mk3j4a9qs1l4bcsn5q9y8mrgsfbc8fwyy1su7pj4nvn14fh04glsotblnkhfq5xlgyukt == \j\p\q\e\d\s\a\1\f\4\b\d\r\h\l\0\k\q\z\w\f\z\7\d\n\o\0\l\s\5\n\s\l\5\i\6\3\f\x\v\i\t\f\2\m\i\6\e\o\8\z\1\9\t\y\8\o\5\a\y\w\0\8\a\3\g\2\p\5\6\3\n\v\c\m\y\h\1\3\q\9\c\g\m\1\s\u\w\a\f\4\d\o\p\d\1\y\o\s\c\b\a\9\a\5\w\j\z\w\q\g\h\2\h\y\i\b\e\e\b\s\9\j\7\x\t\y\l\2\1\w\m\p\q\g\y\3\r\0\f\m\8\6\e\u\5\y\7\6\a\2\g\j\c\s\6\i\t\h\x\0\a\m\s\0\w\2\5\e\d\j\9\d\s\6\1\1\h\u\o\2\b\t\7\9\t\e\w\v\y\i\n\q\1\g\k\y\n\3\h\j\e\b\n\c\3\j\e\p\d\q\h\i\p\4\p\r\m\t\b\j\k\v\s\y\b\b\p\k\h\p\c\i\b\m\0\r\5\i\x\4\s\i\u\a\9\r\m\w\r\e\u\g\s\5\u\n\b\f\n\h\n\j\2\g\l\h\9\w\g\8\q\j\l\b\8\2\c\f\9\o\v\x\8\t\o\q\1\x\7\7\3\f\f\k\6\g\e\9\f\p\t\j\j\o\0\e\z\s\8\q\z\e\m\e\e\q\g\e\m\1\2\u\a\h\h\6\n\5\0\h\0\z\g\z\0\y\x\2\l\q\n\e\h\r\h\g\e\v\o\r\0\i\3\y\9\g\f\d\c\m\q\n\p\6\r\8\r\z\w\f\n\r\m\8\j\1\v\7\2\z\t\8\o\y\w\q\t\w\o\f\q\k\r\a\h\n\w\d\d\e\w\j\l\0\t\i\a\v\n\8\8\i\w\h\j\t\9\v\g\x\d\8\u\7\p\9\r\x\9\g\c\w\d\q\o\g\7\3\h\n\a\5\m\k\3\j\4\a\9\q\s\1\l\4\b\c\s\n\5\q\9\y\8\m\r\g\s\f\b\c\8\f\w\y\y\1\s\u\7\p\j\4\n\v\n\1\4\f\h\0\4\g\l\s\o\t\b\l\n\k\h\f\q\5\x\l\g\y\u\k\t ]] 00:06:59.193 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.193 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:59.193 [2024-12-11 13:47:52.227326] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:59.193 [2024-12-11 13:47:52.227427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61855 ] 00:06:59.451 [2024-12-11 13:47:52.374385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.451 [2024-12-11 13:47:52.442804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.709 [2024-12-11 13:47:52.498344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.709  [2024-12-11T13:47:53.016Z] Copying: 512/512 [B] (average 21 kBps) 00:06:59.969 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jpqedsa1f4bdrhl0kqzwfz7dno0ls5nsl5i63fxvitf2mi6eo8z19ty8o5ayw08a3g2p563nvcmyh13q9cgm1suwaf4dopd1yoscba9a5wjzwqgh2hyibeebs9j7xtyl21wmpqgy3r0fm86eu5y76a2gjcs6ithx0ams0w25edj9ds611huo2bt79tewvyinq1gkyn3hjebnc3jepdqhip4prmtbjkvsybbpkhpcibm0r5ix4siua9rmwreugs5unbfnhnj2glh9wg8qjlb82cf9ovx8toq1x773ffk6ge9fptjjo0ezs8qzemeeqgem12uahh6n50h0zgz0yx2lqnehrhgevor0i3y9gfdcmqnp6r8rzwfnrm8j1v72zt8oywqtwofqkrahnwddewjl0tiavn88iwhjt9vgxd8u7p9rx9gcwdqog73hna5mk3j4a9qs1l4bcsn5q9y8mrgsfbc8fwyy1su7pj4nvn14fh04glsotblnkhfq5xlgyukt == \j\p\q\e\d\s\a\1\f\4\b\d\r\h\l\0\k\q\z\w\f\z\7\d\n\o\0\l\s\5\n\s\l\5\i\6\3\f\x\v\i\t\f\2\m\i\6\e\o\8\z\1\9\t\y\8\o\5\a\y\w\0\8\a\3\g\2\p\5\6\3\n\v\c\m\y\h\1\3\q\9\c\g\m\1\s\u\w\a\f\4\d\o\p\d\1\y\o\s\c\b\a\9\a\5\w\j\z\w\q\g\h\2\h\y\i\b\e\e\b\s\9\j\7\x\t\y\l\2\1\w\m\p\q\g\y\3\r\0\f\m\8\6\e\u\5\y\7\6\a\2\g\j\c\s\6\i\t\h\x\0\a\m\s\0\w\2\5\e\d\j\9\d\s\6\1\1\h\u\o\2\b\t\7\9\t\e\w\v\y\i\n\q\1\g\k\y\n\3\h\j\e\b\n\c\3\j\e\p\d\q\h\i\p\4\p\r\m\t\b\j\k\v\s\y\b\b\p\k\h\p\c\i\b\m\0\r\5\i\x\4\s\i\u\a\9\r\m\w\r\e\u\g\s\5\u\n\b\f\n\h\n\j\2\g\l\h\9\w\g\8\q\j\l\b\8\2\c\f\9\o\v\x\8\t\o\q\1\x\7\7\3\f\f\k\6\g\e\9\f\p\t\j\j\o\0\e\z\s\8\q\z\e\m\e\e\q\g\e\m\1\2\u\a\h\h\6\n\5\0\h\0\z\g\z\0\y\x\2\l\q\n\e\h\r\h\g\e\v\o\r\0\i\3\y\9\g\f\d\c\m\q\n\p\6\r\8\r\z\w\f\n\r\m\8\j\1\v\7\2\z\t\8\o\y\w\q\t\w\o\f\q\k\r\a\h\n\w\d\d\e\w\j\l\0\t\i\a\v\n\8\8\i\w\h\j\t\9\v\g\x\d\8\u\7\p\9\r\x\9\g\c\w\d\q\o\g\7\3\h\n\a\5\m\k\3\j\4\a\9\q\s\1\l\4\b\c\s\n\5\q\9\y\8\m\r\g\s\f\b\c\8\f\w\y\y\1\s\u\7\p\j\4\n\v\n\1\4\f\h\0\4\g\l\s\o\t\b\l\n\k\h\f\q\5\x\l\g\y\u\k\t ]] 00:06:59.970 00:06:59.970 real 0m4.622s 00:06:59.970 user 0m2.448s 00:06:59.970 sys 0m1.144s 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.970 ************************************ 00:06:59.970 END TEST dd_flags_misc_forced_aio 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.970 ************************************ 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:59.970 00:06:59.970 real 0m20.833s 00:06:59.970 user 0m10.057s 00:06:59.970 sys 0m6.762s 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.970 13:47:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:59.970 ************************************ 00:06:59.970 END TEST spdk_dd_posix 00:06:59.970 ************************************ 00:06:59.970 13:47:52 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:59.970 13:47:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.970 13:47:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.970 13:47:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.970 ************************************ 00:06:59.970 START TEST spdk_dd_malloc 00:06:59.970 ************************************ 00:06:59.970 13:47:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:59.970 * Looking for test storage... 00:06:59.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.970 13:47:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.970 13:47:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.970 13:47:52 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.229 --rc genhtml_branch_coverage=1 00:07:00.229 --rc genhtml_function_coverage=1 00:07:00.229 --rc genhtml_legend=1 00:07:00.229 --rc geninfo_all_blocks=1 00:07:00.229 --rc geninfo_unexecuted_blocks=1 00:07:00.229 00:07:00.229 ' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.229 --rc genhtml_branch_coverage=1 00:07:00.229 --rc genhtml_function_coverage=1 00:07:00.229 --rc genhtml_legend=1 00:07:00.229 --rc geninfo_all_blocks=1 00:07:00.229 --rc geninfo_unexecuted_blocks=1 00:07:00.229 00:07:00.229 ' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.229 --rc genhtml_branch_coverage=1 00:07:00.229 --rc genhtml_function_coverage=1 00:07:00.229 --rc genhtml_legend=1 00:07:00.229 --rc geninfo_all_blocks=1 00:07:00.229 --rc geninfo_unexecuted_blocks=1 00:07:00.229 00:07:00.229 ' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.229 --rc genhtml_branch_coverage=1 00:07:00.229 --rc genhtml_function_coverage=1 00:07:00.229 --rc genhtml_legend=1 00:07:00.229 --rc geninfo_all_blocks=1 00:07:00.229 --rc geninfo_unexecuted_blocks=1 00:07:00.229 00:07:00.229 ' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.229 13:47:53 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:00.229 ************************************ 00:07:00.229 START TEST dd_malloc_copy 00:07:00.229 ************************************ 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:00.229 13:47:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.229 [2024-12-11 13:47:53.137532] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:00.229 [2024-12-11 13:47:53.137626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:07:00.229 { 00:07:00.230 "subsystems": [ 00:07:00.230 { 00:07:00.230 "subsystem": "bdev", 00:07:00.230 "config": [ 00:07:00.230 { 00:07:00.230 "params": { 00:07:00.230 "block_size": 512, 00:07:00.230 "num_blocks": 1048576, 00:07:00.230 "name": "malloc0" 00:07:00.230 }, 00:07:00.230 "method": "bdev_malloc_create" 00:07:00.230 }, 00:07:00.230 { 00:07:00.230 "params": { 00:07:00.230 "block_size": 512, 00:07:00.230 "num_blocks": 1048576, 00:07:00.230 "name": "malloc1" 00:07:00.230 }, 00:07:00.230 "method": "bdev_malloc_create" 00:07:00.230 }, 00:07:00.230 { 00:07:00.230 "method": "bdev_wait_for_examine" 00:07:00.230 } 00:07:00.230 ] 00:07:00.230 } 00:07:00.230 ] 00:07:00.230 } 00:07:00.489 [2024-12-11 13:47:53.290275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.489 [2024-12-11 13:47:53.350601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.489 [2024-12-11 13:47:53.409303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.862  [2024-12-11T13:47:55.843Z] Copying: 197/512 [MB] (197 MBps) [2024-12-11T13:47:56.408Z] Copying: 398/512 [MB] (200 MBps) [2024-12-11T13:47:56.975Z] Copying: 512/512 [MB] (average 200 MBps) 00:07:03.928 00:07:03.928 13:47:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:03.928 13:47:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:03.928 13:47:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:03.928 13:47:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.928 [2024-12-11 13:47:56.938333] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:03.928 [2024-12-11 13:47:56.938443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:07:03.928 { 00:07:03.928 "subsystems": [ 00:07:03.928 { 00:07:03.928 "subsystem": "bdev", 00:07:03.928 "config": [ 00:07:03.928 { 00:07:03.928 "params": { 00:07:03.928 "block_size": 512, 00:07:03.928 "num_blocks": 1048576, 00:07:03.928 "name": "malloc0" 00:07:03.928 }, 00:07:03.928 "method": "bdev_malloc_create" 00:07:03.928 }, 00:07:03.928 { 00:07:03.928 "params": { 00:07:03.928 "block_size": 512, 00:07:03.928 "num_blocks": 1048576, 00:07:03.928 "name": "malloc1" 00:07:03.928 }, 00:07:03.928 "method": "bdev_malloc_create" 00:07:03.928 }, 00:07:03.928 { 00:07:03.928 "method": "bdev_wait_for_examine" 00:07:03.928 } 00:07:03.928 ] 00:07:03.928 } 00:07:03.928 ] 00:07:03.928 } 00:07:04.187 [2024-12-11 13:47:57.085998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.187 [2024-12-11 13:47:57.138057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.187 [2024-12-11 13:47:57.192732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.562  [2024-12-11T13:47:59.543Z] Copying: 207/512 [MB] (207 MBps) [2024-12-11T13:48:00.110Z] Copying: 413/512 [MB] (206 MBps) [2024-12-11T13:48:00.676Z] Copying: 512/512 [MB] (average 206 MBps) 00:07:07.629 00:07:07.629 00:07:07.629 real 0m7.502s 00:07:07.629 user 0m6.483s 00:07:07.629 sys 0m0.852s 00:07:07.629 13:48:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.629 13:48:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.629 ************************************ 00:07:07.629 END TEST dd_malloc_copy 00:07:07.629 ************************************ 00:07:07.629 00:07:07.629 real 0m7.769s 00:07:07.629 user 0m6.655s 00:07:07.629 sys 0m0.953s 00:07:07.629 ************************************ 00:07:07.629 END TEST spdk_dd_malloc 00:07:07.629 ************************************ 00:07:07.629 13:48:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.629 13:48:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:07.629 13:48:00 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:07.629 13:48:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.629 13:48:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.629 13:48:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.889 START TEST spdk_dd_bdev_to_bdev 00:07:07.889 ************************************ 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:07.889 * Looking for test storage... 
00:07:07.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.889 --rc genhtml_branch_coverage=1 00:07:07.889 --rc genhtml_function_coverage=1 00:07:07.889 --rc genhtml_legend=1 00:07:07.889 --rc geninfo_all_blocks=1 00:07:07.889 --rc geninfo_unexecuted_blocks=1 00:07:07.889 00:07:07.889 ' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.889 --rc genhtml_branch_coverage=1 00:07:07.889 --rc genhtml_function_coverage=1 00:07:07.889 --rc genhtml_legend=1 00:07:07.889 --rc geninfo_all_blocks=1 00:07:07.889 --rc geninfo_unexecuted_blocks=1 00:07:07.889 00:07:07.889 ' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.889 --rc genhtml_branch_coverage=1 00:07:07.889 --rc genhtml_function_coverage=1 00:07:07.889 --rc genhtml_legend=1 00:07:07.889 --rc geninfo_all_blocks=1 00:07:07.889 --rc geninfo_unexecuted_blocks=1 00:07:07.889 00:07:07.889 ' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.889 --rc genhtml_branch_coverage=1 00:07:07.889 --rc genhtml_function_coverage=1 00:07:07.889 --rc genhtml_legend=1 00:07:07.889 --rc geninfo_all_blocks=1 00:07:07.889 --rc geninfo_unexecuted_blocks=1 00:07:07.889 00:07:07.889 ' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.889 13:48:00 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.889 START TEST dd_inflate_file 00:07:07.889 ************************************ 00:07:07.889 13:48:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:08.147 [2024-12-11 13:48:00.935352] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:08.147 [2024-12-11 13:48:00.935636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62102 ] 00:07:08.147 [2024-12-11 13:48:01.085225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.147 [2024-12-11 13:48:01.131696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.147 [2024-12-11 13:48:01.186330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.405  [2024-12-11T13:48:01.452Z] Copying: 64/64 [MB] (average 1641 MBps) 00:07:08.405 00:07:08.405 00:07:08.405 real 0m0.565s 00:07:08.405 user 0m0.319s 00:07:08.405 sys 0m0.295s 00:07:08.405 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.405 ************************************ 00:07:08.405 END TEST dd_inflate_file 00:07:08.405 ************************************ 00:07:08.405 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.663 ************************************ 00:07:08.663 START TEST dd_copy_to_out_bdev 00:07:08.663 ************************************ 00:07:08.663 13:48:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:08.663 { 00:07:08.663 "subsystems": [ 00:07:08.663 { 00:07:08.663 "subsystem": "bdev", 00:07:08.663 "config": [ 00:07:08.663 { 00:07:08.663 "params": { 00:07:08.663 "trtype": "pcie", 00:07:08.663 "traddr": "0000:00:10.0", 00:07:08.663 "name": "Nvme0" 00:07:08.663 }, 00:07:08.663 "method": "bdev_nvme_attach_controller" 00:07:08.663 }, 00:07:08.663 { 00:07:08.663 "params": { 00:07:08.663 "trtype": "pcie", 00:07:08.663 "traddr": "0000:00:11.0", 00:07:08.663 "name": "Nvme1" 00:07:08.663 }, 00:07:08.663 "method": "bdev_nvme_attach_controller" 00:07:08.663 }, 00:07:08.663 { 00:07:08.663 "method": "bdev_wait_for_examine" 00:07:08.663 } 00:07:08.663 ] 00:07:08.663 } 00:07:08.663 ] 00:07:08.663 } 00:07:08.663 [2024-12-11 13:48:01.575862] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:08.663 [2024-12-11 13:48:01.575965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62136 ] 00:07:08.923 [2024-12-11 13:48:01.729020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.923 [2024-12-11 13:48:01.779288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.923 [2024-12-11 13:48:01.833995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.298  [2024-12-11T13:48:03.345Z] Copying: 57/64 [MB] (57 MBps) [2024-12-11T13:48:03.604Z] Copying: 64/64 [MB] (average 57 MBps) 00:07:10.557 00:07:10.557 ************************************ 00:07:10.557 END TEST dd_copy_to_out_bdev 00:07:10.557 ************************************ 00:07:10.557 00:07:10.557 real 0m1.856s 00:07:10.557 user 0m1.637s 00:07:10.557 sys 0m1.475s 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:10.557 ************************************ 00:07:10.557 START TEST dd_offset_magic 00:07:10.557 ************************************ 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:10.557 13:48:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.557 [2024-12-11 13:48:03.460685] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:10.557 [2024-12-11 13:48:03.460953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62181 ] 00:07:10.557 { 00:07:10.557 "subsystems": [ 00:07:10.557 { 00:07:10.557 "subsystem": "bdev", 00:07:10.557 "config": [ 00:07:10.557 { 00:07:10.557 "params": { 00:07:10.557 "trtype": "pcie", 00:07:10.557 "traddr": "0000:00:10.0", 00:07:10.557 "name": "Nvme0" 00:07:10.557 }, 00:07:10.557 "method": "bdev_nvme_attach_controller" 00:07:10.557 }, 00:07:10.557 { 00:07:10.557 "params": { 00:07:10.557 "trtype": "pcie", 00:07:10.557 "traddr": "0000:00:11.0", 00:07:10.557 "name": "Nvme1" 00:07:10.557 }, 00:07:10.557 "method": "bdev_nvme_attach_controller" 00:07:10.557 }, 00:07:10.557 { 00:07:10.557 "method": "bdev_wait_for_examine" 00:07:10.557 } 00:07:10.557 ] 00:07:10.557 } 00:07:10.557 ] 00:07:10.557 } 00:07:10.815 [2024-12-11 13:48:03.605473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.815 [2024-12-11 13:48:03.655324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.815 [2024-12-11 13:48:03.710229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.074  [2024-12-11T13:48:04.379Z] Copying: 65/65 [MB] (average 942 MBps) 00:07:11.332 00:07:11.332 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:11.332 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:11.332 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:11.332 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:11.332 [2024-12-11 13:48:04.230438] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:11.332 [2024-12-11 13:48:04.230510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:07:11.332 { 00:07:11.332 "subsystems": [ 00:07:11.332 { 00:07:11.332 "subsystem": "bdev", 00:07:11.332 "config": [ 00:07:11.332 { 00:07:11.332 "params": { 00:07:11.333 "trtype": "pcie", 00:07:11.333 "traddr": "0000:00:10.0", 00:07:11.333 "name": "Nvme0" 00:07:11.333 }, 00:07:11.333 "method": "bdev_nvme_attach_controller" 00:07:11.333 }, 00:07:11.333 { 00:07:11.333 "params": { 00:07:11.333 "trtype": "pcie", 00:07:11.333 "traddr": "0000:00:11.0", 00:07:11.333 "name": "Nvme1" 00:07:11.333 }, 00:07:11.333 "method": "bdev_nvme_attach_controller" 00:07:11.333 }, 00:07:11.333 { 00:07:11.333 "method": "bdev_wait_for_examine" 00:07:11.333 } 00:07:11.333 ] 00:07:11.333 } 00:07:11.333 ] 00:07:11.333 } 00:07:11.333 [2024-12-11 13:48:04.371291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.591 [2024-12-11 13:48:04.414689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.591 [2024-12-11 13:48:04.467570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.850  [2024-12-11T13:48:04.897Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:11.850 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:11.850 13:48:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:11.850 [2024-12-11 13:48:04.892359] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:11.850 [2024-12-11 13:48:04.892643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62212 ] 00:07:11.850 { 00:07:11.850 "subsystems": [ 00:07:11.850 { 00:07:11.850 "subsystem": "bdev", 00:07:11.850 "config": [ 00:07:11.850 { 00:07:11.850 "params": { 00:07:11.850 "trtype": "pcie", 00:07:11.850 "traddr": "0000:00:10.0", 00:07:11.850 "name": "Nvme0" 00:07:11.850 }, 00:07:11.850 "method": "bdev_nvme_attach_controller" 00:07:11.850 }, 00:07:11.850 { 00:07:11.850 "params": { 00:07:11.850 "trtype": "pcie", 00:07:11.850 "traddr": "0000:00:11.0", 00:07:11.850 "name": "Nvme1" 00:07:11.850 }, 00:07:11.850 "method": "bdev_nvme_attach_controller" 00:07:11.850 }, 00:07:11.850 { 00:07:11.850 "method": "bdev_wait_for_examine" 00:07:11.850 } 00:07:11.850 ] 00:07:11.850 } 00:07:11.850 ] 00:07:11.850 } 00:07:12.109 [2024-12-11 13:48:05.036306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.109 [2024-12-11 13:48:05.095041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.109 [2024-12-11 13:48:05.148520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.366  [2024-12-11T13:48:05.672Z] Copying: 65/65 [MB] (average 1031 MBps) 00:07:12.625 00:07:12.625 13:48:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:12.625 13:48:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:12.625 13:48:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:12.625 13:48:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:12.883 [2024-12-11 13:48:05.711793] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:12.883 [2024-12-11 13:48:05.711895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:07:12.883 { 00:07:12.883 "subsystems": [ 00:07:12.883 { 00:07:12.883 "subsystem": "bdev", 00:07:12.883 "config": [ 00:07:12.883 { 00:07:12.883 "params": { 00:07:12.883 "trtype": "pcie", 00:07:12.883 "traddr": "0000:00:10.0", 00:07:12.883 "name": "Nvme0" 00:07:12.883 }, 00:07:12.883 "method": "bdev_nvme_attach_controller" 00:07:12.883 }, 00:07:12.883 { 00:07:12.883 "params": { 00:07:12.883 "trtype": "pcie", 00:07:12.883 "traddr": "0000:00:11.0", 00:07:12.883 "name": "Nvme1" 00:07:12.883 }, 00:07:12.883 "method": "bdev_nvme_attach_controller" 00:07:12.883 }, 00:07:12.883 { 00:07:12.883 "method": "bdev_wait_for_examine" 00:07:12.883 } 00:07:12.883 ] 00:07:12.883 } 00:07:12.883 ] 00:07:12.883 } 00:07:12.883 [2024-12-11 13:48:05.859375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.197 [2024-12-11 13:48:05.940182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.197 [2024-12-11 13:48:05.995201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.197  [2024-12-11T13:48:06.524Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:13.477 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:13.477 00:07:13.477 real 0m2.956s 00:07:13.477 user 0m2.147s 00:07:13.477 sys 0m0.885s 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.477 ************************************ 00:07:13.477 END TEST dd_offset_magic 00:07:13.477 ************************************ 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.477 13:48:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 [2024-12-11 13:48:06.467636] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:13.477 [2024-12-11 13:48:06.467762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:07:13.477 { 00:07:13.477 "subsystems": [ 00:07:13.477 { 00:07:13.477 "subsystem": "bdev", 00:07:13.477 "config": [ 00:07:13.477 { 00:07:13.477 "params": { 00:07:13.477 "trtype": "pcie", 00:07:13.477 "traddr": "0000:00:10.0", 00:07:13.477 "name": "Nvme0" 00:07:13.477 }, 00:07:13.477 "method": "bdev_nvme_attach_controller" 00:07:13.477 }, 00:07:13.477 { 00:07:13.477 "params": { 00:07:13.477 "trtype": "pcie", 00:07:13.477 "traddr": "0000:00:11.0", 00:07:13.477 "name": "Nvme1" 00:07:13.477 }, 00:07:13.477 "method": "bdev_nvme_attach_controller" 00:07:13.477 }, 00:07:13.477 { 00:07:13.477 "method": "bdev_wait_for_examine" 00:07:13.477 } 00:07:13.477 ] 00:07:13.477 } 00:07:13.477 ] 00:07:13.477 } 00:07:13.736 [2024-12-11 13:48:06.613781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.736 [2024-12-11 13:48:06.673881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.736 [2024-12-11 13:48:06.727100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.995  [2024-12-11T13:48:07.300Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:14.253 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:14.253 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:14.253 [2024-12-11 13:48:07.181363] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:14.253 [2024-12-11 13:48:07.181494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62279 ] 00:07:14.253 { 00:07:14.253 "subsystems": [ 00:07:14.253 { 00:07:14.253 "subsystem": "bdev", 00:07:14.253 "config": [ 00:07:14.253 { 00:07:14.253 "params": { 00:07:14.253 "trtype": "pcie", 00:07:14.253 "traddr": "0000:00:10.0", 00:07:14.253 "name": "Nvme0" 00:07:14.253 }, 00:07:14.253 "method": "bdev_nvme_attach_controller" 00:07:14.253 }, 00:07:14.253 { 00:07:14.253 "params": { 00:07:14.253 "trtype": "pcie", 00:07:14.253 "traddr": "0000:00:11.0", 00:07:14.253 "name": "Nvme1" 00:07:14.253 }, 00:07:14.253 "method": "bdev_nvme_attach_controller" 00:07:14.253 }, 00:07:14.254 { 00:07:14.254 "method": "bdev_wait_for_examine" 00:07:14.254 } 00:07:14.254 ] 00:07:14.254 } 00:07:14.254 ] 00:07:14.254 } 00:07:14.512 [2024-12-11 13:48:07.338297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.512 [2024-12-11 13:48:07.406233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.512 [2024-12-11 13:48:07.464129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.771  [2024-12-11T13:48:08.077Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:15.030 00:07:15.030 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:15.030 ************************************ 00:07:15.030 END TEST spdk_dd_bdev_to_bdev 00:07:15.030 ************************************ 00:07:15.030 00:07:15.030 real 0m7.198s 00:07:15.030 user 0m5.330s 00:07:15.030 sys 0m3.372s 00:07:15.030 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.030 13:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:15.030 13:48:07 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:15.030 13:48:07 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:15.030 13:48:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.030 13:48:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.030 13:48:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.030 ************************************ 00:07:15.030 START TEST spdk_dd_uring 00:07:15.030 ************************************ 00:07:15.030 13:48:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:15.030 * Looking for test storage... 
00:07:15.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.030 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.030 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.030 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.289 --rc genhtml_branch_coverage=1 00:07:15.289 --rc genhtml_function_coverage=1 00:07:15.289 --rc genhtml_legend=1 00:07:15.289 --rc geninfo_all_blocks=1 00:07:15.289 --rc geninfo_unexecuted_blocks=1 00:07:15.289 00:07:15.289 ' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.289 --rc genhtml_branch_coverage=1 00:07:15.289 --rc genhtml_function_coverage=1 00:07:15.289 --rc genhtml_legend=1 00:07:15.289 --rc geninfo_all_blocks=1 00:07:15.289 --rc geninfo_unexecuted_blocks=1 00:07:15.289 00:07:15.289 ' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.289 --rc genhtml_branch_coverage=1 00:07:15.289 --rc genhtml_function_coverage=1 00:07:15.289 --rc genhtml_legend=1 00:07:15.289 --rc geninfo_all_blocks=1 00:07:15.289 --rc geninfo_unexecuted_blocks=1 00:07:15.289 00:07:15.289 ' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.289 --rc genhtml_branch_coverage=1 00:07:15.289 --rc genhtml_function_coverage=1 00:07:15.289 --rc genhtml_legend=1 00:07:15.289 --rc geninfo_all_blocks=1 00:07:15.289 --rc geninfo_unexecuted_blocks=1 00:07:15.289 00:07:15.289 ' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:15.289 ************************************ 00:07:15.289 START TEST dd_uring_copy 00:07:15.289 ************************************ 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:15.289 
13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:15.289 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=4zxey73rr2gx6jz3fklxadku6selrvp6hx0phswl93kd374lpwbwlhu42146fju8dc1kizsc49v0ns4j4b54d31moih0jxxar2cautqt44h2ndrzjmi2bz1xyqyrx57k8n2j8y9fvbdchpamqnuwlgi74ii6wr1vpu5taxfixh2nlt6sneah3yozlp6vjoiwsvmlfu38mca8jnoe6ivkiq2ovp7s4vlktugxzaf3bmbcnveeu3jgfsosbo9d6smhev64piqq08p42axdcxn5115cqafi6io1d1y0xtad8vyq9gya514k34rd5wy0od5mpg8izlor9slnl4maxnt8xl2bgxunt2mxpmvlarexd8q49d9frt2pv4hwz994k8dz5wbduzdjogn80mbkgr40wawk19fmi7tn855x4w4etzdkiz6jyhsijt2pw9mex3g4xzw2wfmt08lrt0n7qwkdf2szfcv9l1fto94pwcx313wszpfz94dt05ecvjakn0icaygc4gpliva82rljiyfw6u7sics9ax95x9b3x7vukirt2owty0ofmw3ig4t139whg9lz7if4mrm071f30cgiu26lz7i63kvc1uzdko9rn17s05auloiqa17e5bamscahxnvvejrs92dkhe8y82lxbvv5lg4rqx32bgdbw31r78ef6ru7xiz9ttcvpkqn6jofubo82euh8klgmxjkp4uofyuyp0sd51yvfr02faqswfisxt43n02bjnb2ix56jeyg7qf4sdxqt2rh4kptnw69t19rh7plqorb332yz75d5c696r4vj1q0ly4j3kf18tyh8d5rew8mdcgjt74uwugev2vdpb7eer4gsmhf5anuw98b5xnpqqyxod9l5helpnit7bu9yo9z5j2wde9inu2ahj80r4ay5tjpunupytikan2dz2tn3hq9d8smdt802ld974w5zx19zbi1mmgiurodtfnfsdl7b0v6ze8n0g9xrlxtn9marti5ictkljt5lnji 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
4zxey73rr2gx6jz3fklxadku6selrvp6hx0phswl93kd374lpwbwlhu42146fju8dc1kizsc49v0ns4j4b54d31moih0jxxar2cautqt44h2ndrzjmi2bz1xyqyrx57k8n2j8y9fvbdchpamqnuwlgi74ii6wr1vpu5taxfixh2nlt6sneah3yozlp6vjoiwsvmlfu38mca8jnoe6ivkiq2ovp7s4vlktugxzaf3bmbcnveeu3jgfsosbo9d6smhev64piqq08p42axdcxn5115cqafi6io1d1y0xtad8vyq9gya514k34rd5wy0od5mpg8izlor9slnl4maxnt8xl2bgxunt2mxpmvlarexd8q49d9frt2pv4hwz994k8dz5wbduzdjogn80mbkgr40wawk19fmi7tn855x4w4etzdkiz6jyhsijt2pw9mex3g4xzw2wfmt08lrt0n7qwkdf2szfcv9l1fto94pwcx313wszpfz94dt05ecvjakn0icaygc4gpliva82rljiyfw6u7sics9ax95x9b3x7vukirt2owty0ofmw3ig4t139whg9lz7if4mrm071f30cgiu26lz7i63kvc1uzdko9rn17s05auloiqa17e5bamscahxnvvejrs92dkhe8y82lxbvv5lg4rqx32bgdbw31r78ef6ru7xiz9ttcvpkqn6jofubo82euh8klgmxjkp4uofyuyp0sd51yvfr02faqswfisxt43n02bjnb2ix56jeyg7qf4sdxqt2rh4kptnw69t19rh7plqorb332yz75d5c696r4vj1q0ly4j3kf18tyh8d5rew8mdcgjt74uwugev2vdpb7eer4gsmhf5anuw98b5xnpqqyxod9l5helpnit7bu9yo9z5j2wde9inu2ahj80r4ay5tjpunupytikan2dz2tn3hq9d8smdt802ld974w5zx19zbi1mmgiurodtfnfsdl7b0v6ze8n0g9xrlxtn9marti5ictkljt5lnji 00:07:15.290 13:48:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:15.290 [2024-12-11 13:48:08.218080] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:15.290 [2024-12-11 13:48:08.218357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:07:15.548 [2024-12-11 13:48:08.365806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.548 [2024-12-11 13:48:08.428117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.548 [2024-12-11 13:48:08.481565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.115  [2024-12-11T13:48:09.729Z] Copying: 511/511 [MB] (average 1434 MBps) 00:07:16.682 00:07:16.682 13:48:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:16.682 13:48:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:16.682 13:48:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.682 13:48:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.682 [2024-12-11 13:48:09.507536] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:16.682 [2024-12-11 13:48:09.507971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62384 ] 00:07:16.682 { 00:07:16.682 "subsystems": [ 00:07:16.682 { 00:07:16.682 "subsystem": "bdev", 00:07:16.682 "config": [ 00:07:16.682 { 00:07:16.682 "params": { 00:07:16.682 "block_size": 512, 00:07:16.682 "num_blocks": 1048576, 00:07:16.682 "name": "malloc0" 00:07:16.682 }, 00:07:16.682 "method": "bdev_malloc_create" 00:07:16.682 }, 00:07:16.682 { 00:07:16.682 "params": { 00:07:16.682 "filename": "/dev/zram1", 00:07:16.682 "name": "uring0" 00:07:16.682 }, 00:07:16.682 "method": "bdev_uring_create" 00:07:16.682 }, 00:07:16.682 { 00:07:16.682 "method": "bdev_wait_for_examine" 00:07:16.682 } 00:07:16.682 ] 00:07:16.682 } 00:07:16.682 ] 00:07:16.682 } 00:07:16.682 [2024-12-11 13:48:09.654965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.682 [2024-12-11 13:48:09.715115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.940 [2024-12-11 13:48:09.769422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.314  [2024-12-11T13:48:12.294Z] Copying: 221/512 [MB] (221 MBps) [2024-12-11T13:48:12.294Z] Copying: 441/512 [MB] (219 MBps) [2024-12-11T13:48:12.888Z] Copying: 512/512 [MB] (average 221 MBps) 00:07:19.841 00:07:19.841 13:48:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:19.841 13:48:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:19.841 13:48:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:19.841 13:48:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.841 [2024-12-11 13:48:12.760842] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:19.841 [2024-12-11 13:48:12.760955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:07:19.841 { 00:07:19.841 "subsystems": [ 00:07:19.841 { 00:07:19.841 "subsystem": "bdev", 00:07:19.841 "config": [ 00:07:19.841 { 00:07:19.841 "params": { 00:07:19.841 "block_size": 512, 00:07:19.841 "num_blocks": 1048576, 00:07:19.841 "name": "malloc0" 00:07:19.841 }, 00:07:19.841 "method": "bdev_malloc_create" 00:07:19.841 }, 00:07:19.841 { 00:07:19.841 "params": { 00:07:19.841 "filename": "/dev/zram1", 00:07:19.841 "name": "uring0" 00:07:19.841 }, 00:07:19.841 "method": "bdev_uring_create" 00:07:19.841 }, 00:07:19.841 { 00:07:19.841 "method": "bdev_wait_for_examine" 00:07:19.841 } 00:07:19.841 ] 00:07:19.841 } 00:07:19.841 ] 00:07:19.841 } 00:07:20.099 [2024-12-11 13:48:12.910872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.099 [2024-12-11 13:48:12.969035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.099 [2024-12-11 13:48:13.021981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.474  [2024-12-11T13:48:15.454Z] Copying: 169/512 [MB] (169 MBps) [2024-12-11T13:48:16.388Z] Copying: 336/512 [MB] (167 MBps) [2024-12-11T13:48:16.388Z] Copying: 499/512 [MB] (162 MBps) [2024-12-11T13:48:16.954Z] Copying: 512/512 [MB] (average 166 MBps) 00:07:23.907 00:07:23.907 13:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:23.907 13:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 4zxey73rr2gx6jz3fklxadku6selrvp6hx0phswl93kd374lpwbwlhu42146fju8dc1kizsc49v0ns4j4b54d31moih0jxxar2cautqt44h2ndrzjmi2bz1xyqyrx57k8n2j8y9fvbdchpamqnuwlgi74ii6wr1vpu5taxfixh2nlt6sneah3yozlp6vjoiwsvmlfu38mca8jnoe6ivkiq2ovp7s4vlktugxzaf3bmbcnveeu3jgfsosbo9d6smhev64piqq08p42axdcxn5115cqafi6io1d1y0xtad8vyq9gya514k34rd5wy0od5mpg8izlor9slnl4maxnt8xl2bgxunt2mxpmvlarexd8q49d9frt2pv4hwz994k8dz5wbduzdjogn80mbkgr40wawk19fmi7tn855x4w4etzdkiz6jyhsijt2pw9mex3g4xzw2wfmt08lrt0n7qwkdf2szfcv9l1fto94pwcx313wszpfz94dt05ecvjakn0icaygc4gpliva82rljiyfw6u7sics9ax95x9b3x7vukirt2owty0ofmw3ig4t139whg9lz7if4mrm071f30cgiu26lz7i63kvc1uzdko9rn17s05auloiqa17e5bamscahxnvvejrs92dkhe8y82lxbvv5lg4rqx32bgdbw31r78ef6ru7xiz9ttcvpkqn6jofubo82euh8klgmxjkp4uofyuyp0sd51yvfr02faqswfisxt43n02bjnb2ix56jeyg7qf4sdxqt2rh4kptnw69t19rh7plqorb332yz75d5c696r4vj1q0ly4j3kf18tyh8d5rew8mdcgjt74uwugev2vdpb7eer4gsmhf5anuw98b5xnpqqyxod9l5helpnit7bu9yo9z5j2wde9inu2ahj80r4ay5tjpunupytikan2dz2tn3hq9d8smdt802ld974w5zx19zbi1mmgiurodtfnfsdl7b0v6ze8n0g9xrlxtn9marti5ictkljt5lnji == 
\4\z\x\e\y\7\3\r\r\2\g\x\6\j\z\3\f\k\l\x\a\d\k\u\6\s\e\l\r\v\p\6\h\x\0\p\h\s\w\l\9\3\k\d\3\7\4\l\p\w\b\w\l\h\u\4\2\1\4\6\f\j\u\8\d\c\1\k\i\z\s\c\4\9\v\0\n\s\4\j\4\b\5\4\d\3\1\m\o\i\h\0\j\x\x\a\r\2\c\a\u\t\q\t\4\4\h\2\n\d\r\z\j\m\i\2\b\z\1\x\y\q\y\r\x\5\7\k\8\n\2\j\8\y\9\f\v\b\d\c\h\p\a\m\q\n\u\w\l\g\i\7\4\i\i\6\w\r\1\v\p\u\5\t\a\x\f\i\x\h\2\n\l\t\6\s\n\e\a\h\3\y\o\z\l\p\6\v\j\o\i\w\s\v\m\l\f\u\3\8\m\c\a\8\j\n\o\e\6\i\v\k\i\q\2\o\v\p\7\s\4\v\l\k\t\u\g\x\z\a\f\3\b\m\b\c\n\v\e\e\u\3\j\g\f\s\o\s\b\o\9\d\6\s\m\h\e\v\6\4\p\i\q\q\0\8\p\4\2\a\x\d\c\x\n\5\1\1\5\c\q\a\f\i\6\i\o\1\d\1\y\0\x\t\a\d\8\v\y\q\9\g\y\a\5\1\4\k\3\4\r\d\5\w\y\0\o\d\5\m\p\g\8\i\z\l\o\r\9\s\l\n\l\4\m\a\x\n\t\8\x\l\2\b\g\x\u\n\t\2\m\x\p\m\v\l\a\r\e\x\d\8\q\4\9\d\9\f\r\t\2\p\v\4\h\w\z\9\9\4\k\8\d\z\5\w\b\d\u\z\d\j\o\g\n\8\0\m\b\k\g\r\4\0\w\a\w\k\1\9\f\m\i\7\t\n\8\5\5\x\4\w\4\e\t\z\d\k\i\z\6\j\y\h\s\i\j\t\2\p\w\9\m\e\x\3\g\4\x\z\w\2\w\f\m\t\0\8\l\r\t\0\n\7\q\w\k\d\f\2\s\z\f\c\v\9\l\1\f\t\o\9\4\p\w\c\x\3\1\3\w\s\z\p\f\z\9\4\d\t\0\5\e\c\v\j\a\k\n\0\i\c\a\y\g\c\4\g\p\l\i\v\a\8\2\r\l\j\i\y\f\w\6\u\7\s\i\c\s\9\a\x\9\5\x\9\b\3\x\7\v\u\k\i\r\t\2\o\w\t\y\0\o\f\m\w\3\i\g\4\t\1\3\9\w\h\g\9\l\z\7\i\f\4\m\r\m\0\7\1\f\3\0\c\g\i\u\2\6\l\z\7\i\6\3\k\v\c\1\u\z\d\k\o\9\r\n\1\7\s\0\5\a\u\l\o\i\q\a\1\7\e\5\b\a\m\s\c\a\h\x\n\v\v\e\j\r\s\9\2\d\k\h\e\8\y\8\2\l\x\b\v\v\5\l\g\4\r\q\x\3\2\b\g\d\b\w\3\1\r\7\8\e\f\6\r\u\7\x\i\z\9\t\t\c\v\p\k\q\n\6\j\o\f\u\b\o\8\2\e\u\h\8\k\l\g\m\x\j\k\p\4\u\o\f\y\u\y\p\0\s\d\5\1\y\v\f\r\0\2\f\a\q\s\w\f\i\s\x\t\4\3\n\0\2\b\j\n\b\2\i\x\5\6\j\e\y\g\7\q\f\4\s\d\x\q\t\2\r\h\4\k\p\t\n\w\6\9\t\1\9\r\h\7\p\l\q\o\r\b\3\3\2\y\z\7\5\d\5\c\6\9\6\r\4\v\j\1\q\0\l\y\4\j\3\k\f\1\8\t\y\h\8\d\5\r\e\w\8\m\d\c\g\j\t\7\4\u\w\u\g\e\v\2\v\d\p\b\7\e\e\r\4\g\s\m\h\f\5\a\n\u\w\9\8\b\5\x\n\p\q\q\y\x\o\d\9\l\5\h\e\l\p\n\i\t\7\b\u\9\y\o\9\z\5\j\2\w\d\e\9\i\n\u\2\a\h\j\8\0\r\4\a\y\5\t\j\p\u\n\u\p\y\t\i\k\a\n\2\d\z\2\t\n\3\h\q\9\d\8\s\m\d\t\8\0\2\l\d\9\7\4\w\5\z\x\1\9\z\b\i\1\m\m\g\i\u\r\o\d\t\f\n\f\s\d\l\7\b\0\v\6\z\e\8\n\0\g\9\x\r\l\x\t\n\9\m\a\r\t\i\5\i\c\t\k\l\j\t\5\l\n\j\i ]] 00:07:23.907 13:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:23.908 13:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 4zxey73rr2gx6jz3fklxadku6selrvp6hx0phswl93kd374lpwbwlhu42146fju8dc1kizsc49v0ns4j4b54d31moih0jxxar2cautqt44h2ndrzjmi2bz1xyqyrx57k8n2j8y9fvbdchpamqnuwlgi74ii6wr1vpu5taxfixh2nlt6sneah3yozlp6vjoiwsvmlfu38mca8jnoe6ivkiq2ovp7s4vlktugxzaf3bmbcnveeu3jgfsosbo9d6smhev64piqq08p42axdcxn5115cqafi6io1d1y0xtad8vyq9gya514k34rd5wy0od5mpg8izlor9slnl4maxnt8xl2bgxunt2mxpmvlarexd8q49d9frt2pv4hwz994k8dz5wbduzdjogn80mbkgr40wawk19fmi7tn855x4w4etzdkiz6jyhsijt2pw9mex3g4xzw2wfmt08lrt0n7qwkdf2szfcv9l1fto94pwcx313wszpfz94dt05ecvjakn0icaygc4gpliva82rljiyfw6u7sics9ax95x9b3x7vukirt2owty0ofmw3ig4t139whg9lz7if4mrm071f30cgiu26lz7i63kvc1uzdko9rn17s05auloiqa17e5bamscahxnvvejrs92dkhe8y82lxbvv5lg4rqx32bgdbw31r78ef6ru7xiz9ttcvpkqn6jofubo82euh8klgmxjkp4uofyuyp0sd51yvfr02faqswfisxt43n02bjnb2ix56jeyg7qf4sdxqt2rh4kptnw69t19rh7plqorb332yz75d5c696r4vj1q0ly4j3kf18tyh8d5rew8mdcgjt74uwugev2vdpb7eer4gsmhf5anuw98b5xnpqqyxod9l5helpnit7bu9yo9z5j2wde9inu2ahj80r4ay5tjpunupytikan2dz2tn3hq9d8smdt802ld974w5zx19zbi1mmgiurodtfnfsdl7b0v6ze8n0g9xrlxtn9marti5ictkljt5lnji == 
\4\z\x\e\y\7\3\r\r\2\g\x\6\j\z\3\f\k\l\x\a\d\k\u\6\s\e\l\r\v\p\6\h\x\0\p\h\s\w\l\9\3\k\d\3\7\4\l\p\w\b\w\l\h\u\4\2\1\4\6\f\j\u\8\d\c\1\k\i\z\s\c\4\9\v\0\n\s\4\j\4\b\5\4\d\3\1\m\o\i\h\0\j\x\x\a\r\2\c\a\u\t\q\t\4\4\h\2\n\d\r\z\j\m\i\2\b\z\1\x\y\q\y\r\x\5\7\k\8\n\2\j\8\y\9\f\v\b\d\c\h\p\a\m\q\n\u\w\l\g\i\7\4\i\i\6\w\r\1\v\p\u\5\t\a\x\f\i\x\h\2\n\l\t\6\s\n\e\a\h\3\y\o\z\l\p\6\v\j\o\i\w\s\v\m\l\f\u\3\8\m\c\a\8\j\n\o\e\6\i\v\k\i\q\2\o\v\p\7\s\4\v\l\k\t\u\g\x\z\a\f\3\b\m\b\c\n\v\e\e\u\3\j\g\f\s\o\s\b\o\9\d\6\s\m\h\e\v\6\4\p\i\q\q\0\8\p\4\2\a\x\d\c\x\n\5\1\1\5\c\q\a\f\i\6\i\o\1\d\1\y\0\x\t\a\d\8\v\y\q\9\g\y\a\5\1\4\k\3\4\r\d\5\w\y\0\o\d\5\m\p\g\8\i\z\l\o\r\9\s\l\n\l\4\m\a\x\n\t\8\x\l\2\b\g\x\u\n\t\2\m\x\p\m\v\l\a\r\e\x\d\8\q\4\9\d\9\f\r\t\2\p\v\4\h\w\z\9\9\4\k\8\d\z\5\w\b\d\u\z\d\j\o\g\n\8\0\m\b\k\g\r\4\0\w\a\w\k\1\9\f\m\i\7\t\n\8\5\5\x\4\w\4\e\t\z\d\k\i\z\6\j\y\h\s\i\j\t\2\p\w\9\m\e\x\3\g\4\x\z\w\2\w\f\m\t\0\8\l\r\t\0\n\7\q\w\k\d\f\2\s\z\f\c\v\9\l\1\f\t\o\9\4\p\w\c\x\3\1\3\w\s\z\p\f\z\9\4\d\t\0\5\e\c\v\j\a\k\n\0\i\c\a\y\g\c\4\g\p\l\i\v\a\8\2\r\l\j\i\y\f\w\6\u\7\s\i\c\s\9\a\x\9\5\x\9\b\3\x\7\v\u\k\i\r\t\2\o\w\t\y\0\o\f\m\w\3\i\g\4\t\1\3\9\w\h\g\9\l\z\7\i\f\4\m\r\m\0\7\1\f\3\0\c\g\i\u\2\6\l\z\7\i\6\3\k\v\c\1\u\z\d\k\o\9\r\n\1\7\s\0\5\a\u\l\o\i\q\a\1\7\e\5\b\a\m\s\c\a\h\x\n\v\v\e\j\r\s\9\2\d\k\h\e\8\y\8\2\l\x\b\v\v\5\l\g\4\r\q\x\3\2\b\g\d\b\w\3\1\r\7\8\e\f\6\r\u\7\x\i\z\9\t\t\c\v\p\k\q\n\6\j\o\f\u\b\o\8\2\e\u\h\8\k\l\g\m\x\j\k\p\4\u\o\f\y\u\y\p\0\s\d\5\1\y\v\f\r\0\2\f\a\q\s\w\f\i\s\x\t\4\3\n\0\2\b\j\n\b\2\i\x\5\6\j\e\y\g\7\q\f\4\s\d\x\q\t\2\r\h\4\k\p\t\n\w\6\9\t\1\9\r\h\7\p\l\q\o\r\b\3\3\2\y\z\7\5\d\5\c\6\9\6\r\4\v\j\1\q\0\l\y\4\j\3\k\f\1\8\t\y\h\8\d\5\r\e\w\8\m\d\c\g\j\t\7\4\u\w\u\g\e\v\2\v\d\p\b\7\e\e\r\4\g\s\m\h\f\5\a\n\u\w\9\8\b\5\x\n\p\q\q\y\x\o\d\9\l\5\h\e\l\p\n\i\t\7\b\u\9\y\o\9\z\5\j\2\w\d\e\9\i\n\u\2\a\h\j\8\0\r\4\a\y\5\t\j\p\u\n\u\p\y\t\i\k\a\n\2\d\z\2\t\n\3\h\q\9\d\8\s\m\d\t\8\0\2\l\d\9\7\4\w\5\z\x\1\9\z\b\i\1\m\m\g\i\u\r\o\d\t\f\n\f\s\d\l\7\b\0\v\6\z\e\8\n\0\g\9\x\r\l\x\t\n\9\m\a\r\t\i\5\i\c\t\k\l\j\t\5\l\n\j\i ]] 00:07:23.908 13:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:24.166 13:48:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:24.166 13:48:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:24.166 13:48:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:24.166 13:48:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.166 [2024-12-11 13:48:17.097058] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:24.166 [2024-12-11 13:48:17.097142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62504 ] 00:07:24.166 { 00:07:24.166 "subsystems": [ 00:07:24.166 { 00:07:24.166 "subsystem": "bdev", 00:07:24.166 "config": [ 00:07:24.166 { 00:07:24.166 "params": { 00:07:24.166 "block_size": 512, 00:07:24.166 "num_blocks": 1048576, 00:07:24.166 "name": "malloc0" 00:07:24.166 }, 00:07:24.166 "method": "bdev_malloc_create" 00:07:24.166 }, 00:07:24.166 { 00:07:24.166 "params": { 00:07:24.166 "filename": "/dev/zram1", 00:07:24.166 "name": "uring0" 00:07:24.166 }, 00:07:24.166 "method": "bdev_uring_create" 00:07:24.166 }, 00:07:24.166 { 00:07:24.166 "method": "bdev_wait_for_examine" 00:07:24.166 } 00:07:24.166 ] 00:07:24.166 } 00:07:24.166 ] 00:07:24.166 } 00:07:24.424 [2024-12-11 13:48:17.242474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.424 [2024-12-11 13:48:17.302125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.424 [2024-12-11 13:48:17.358307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.808  [2024-12-11T13:48:19.791Z] Copying: 147/512 [MB] (147 MBps) [2024-12-11T13:48:20.726Z] Copying: 295/512 [MB] (147 MBps) [2024-12-11T13:48:21.293Z] Copying: 441/512 [MB] (146 MBps) [2024-12-11T13:48:21.551Z] Copying: 512/512 [MB] (average 146 MBps) 00:07:28.504 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.504 13:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:28.504 [2024-12-11 13:48:21.491240] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:28.504 [2024-12-11 13:48:21.491344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62573 ] 00:07:28.504 { 00:07:28.504 "subsystems": [ 00:07:28.504 { 00:07:28.504 "subsystem": "bdev", 00:07:28.504 "config": [ 00:07:28.504 { 00:07:28.504 "params": { 00:07:28.504 "block_size": 512, 00:07:28.504 "num_blocks": 1048576, 00:07:28.504 "name": "malloc0" 00:07:28.504 }, 00:07:28.504 "method": "bdev_malloc_create" 00:07:28.504 }, 00:07:28.504 { 00:07:28.504 "params": { 00:07:28.504 "filename": "/dev/zram1", 00:07:28.505 "name": "uring0" 00:07:28.505 }, 00:07:28.505 "method": "bdev_uring_create" 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "params": { 00:07:28.505 "name": "uring0" 00:07:28.505 }, 00:07:28.505 "method": "bdev_uring_delete" 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "method": "bdev_wait_for_examine" 00:07:28.505 } 00:07:28.505 ] 00:07:28.505 } 00:07:28.505 ] 00:07:28.505 } 00:07:28.763 [2024-12-11 13:48:21.638360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.763 [2024-12-11 13:48:21.692289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.763 [2024-12-11 13:48:21.746619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.021  [2024-12-11T13:48:22.635Z] Copying: 0/0 [B] (average 0 Bps) 00:07:29.588 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.588 13:48:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.588 13:48:22 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:29.588 [2024-12-11 13:48:22.398444] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:29.588 [2024-12-11 13:48:22.398544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62602 ] 00:07:29.588 { 00:07:29.588 "subsystems": [ 00:07:29.588 { 00:07:29.588 "subsystem": "bdev", 00:07:29.588 "config": [ 00:07:29.588 { 00:07:29.588 "params": { 00:07:29.588 "block_size": 512, 00:07:29.588 "num_blocks": 1048576, 00:07:29.588 "name": "malloc0" 00:07:29.588 }, 00:07:29.588 "method": "bdev_malloc_create" 00:07:29.588 }, 00:07:29.588 { 00:07:29.588 "params": { 00:07:29.588 "filename": "/dev/zram1", 00:07:29.588 "name": "uring0" 00:07:29.588 }, 00:07:29.588 "method": "bdev_uring_create" 00:07:29.588 }, 00:07:29.588 { 00:07:29.588 "params": { 00:07:29.588 "name": "uring0" 00:07:29.589 }, 00:07:29.589 "method": "bdev_uring_delete" 00:07:29.589 }, 00:07:29.589 { 00:07:29.589 "method": "bdev_wait_for_examine" 00:07:29.589 } 00:07:29.589 ] 00:07:29.589 } 00:07:29.589 ] 00:07:29.589 } 00:07:29.589 [2024-12-11 13:48:22.545449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.589 [2024-12-11 13:48:22.601500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.846 [2024-12-11 13:48:22.655786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.847 [2024-12-11 13:48:22.861589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:29.847 [2024-12-11 13:48:22.861649] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:29.847 [2024-12-11 13:48:22.861661] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:07:29.847 [2024-12-11 13:48:22.861672] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.413 [2024-12-11 13:48:23.177371] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:30.413 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:30.697 00:07:30.697 real 0m15.427s 00:07:30.697 user 0m10.303s 00:07:30.697 sys 0m13.077s 00:07:30.697 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.697 13:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.697 ************************************ 00:07:30.697 END TEST dd_uring_copy 00:07:30.697 ************************************ 00:07:30.697 00:07:30.697 real 0m15.671s 00:07:30.697 user 0m10.436s 00:07:30.697 sys 0m13.193s 00:07:30.697 13:48:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.697 13:48:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:30.697 ************************************ 00:07:30.697 END TEST spdk_dd_uring 00:07:30.697 ************************************ 00:07:30.697 13:48:23 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.697 13:48:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.697 13:48:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.697 13:48:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.697 ************************************ 00:07:30.697 START TEST spdk_dd_sparse 00:07:30.697 ************************************ 00:07:30.697 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.697 * Looking for test storage... 00:07:30.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.697 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.697 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.697 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.956 --rc genhtml_branch_coverage=1 00:07:30.956 --rc genhtml_function_coverage=1 00:07:30.956 --rc genhtml_legend=1 00:07:30.956 --rc geninfo_all_blocks=1 00:07:30.956 --rc geninfo_unexecuted_blocks=1 00:07:30.956 00:07:30.956 ' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.956 --rc genhtml_branch_coverage=1 00:07:30.956 --rc genhtml_function_coverage=1 00:07:30.956 --rc genhtml_legend=1 00:07:30.956 --rc geninfo_all_blocks=1 00:07:30.956 --rc geninfo_unexecuted_blocks=1 00:07:30.956 00:07:30.956 ' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.956 --rc genhtml_branch_coverage=1 00:07:30.956 --rc genhtml_function_coverage=1 00:07:30.956 --rc genhtml_legend=1 00:07:30.956 --rc geninfo_all_blocks=1 00:07:30.956 --rc geninfo_unexecuted_blocks=1 00:07:30.956 00:07:30.956 ' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.956 --rc genhtml_branch_coverage=1 00:07:30.956 --rc genhtml_function_coverage=1 00:07:30.956 --rc genhtml_legend=1 00:07:30.956 --rc geninfo_all_blocks=1 00:07:30.956 --rc geninfo_unexecuted_blocks=1 00:07:30.956 00:07:30.956 ' 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.956 13:48:23 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.956 13:48:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:30.957 1+0 records in 00:07:30.957 1+0 records out 00:07:30.957 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00702909 s, 597 MB/s 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:30.957 1+0 records in 00:07:30.957 1+0 records out 00:07:30.957 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00413694 s, 1.0 GB/s 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:30.957 1+0 records in 00:07:30.957 1+0 records out 00:07:30.957 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00723458 s, 580 MB/s 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:30.957 ************************************ 00:07:30.957 START TEST dd_sparse_file_to_file 00:07:30.957 ************************************ 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:30.957 13:48:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:30.957 [2024-12-11 13:48:23.930105] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
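Illustrative note, not part of the captured output: the sparse layout produced by the prepare step above, and the numbers the stat %s/%b checks below verify.
  # file_zero1 after prepare (dd bs=4M, 1-count writes at seek=0, 4 and 8):
  #   allocated data : 3 x 4 MiB = 12 MiB        -> %b = 12 MiB / 512 B = 24576 blocks
  #   apparent size  : (8 + 1) x 4 MiB = 36 MiB  -> %s = 37748736 bytes
  # the --sparse copy must preserve both, so each value is compared between
  # file_zero1 and file_zero2 (and later between file_zero2 and file_zero3)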
00:07:30.957 [2024-12-11 13:48:23.930205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62702 ] 00:07:30.957 { 00:07:30.957 "subsystems": [ 00:07:30.957 { 00:07:30.957 "subsystem": "bdev", 00:07:30.957 "config": [ 00:07:30.957 { 00:07:30.957 "params": { 00:07:30.957 "block_size": 4096, 00:07:30.957 "filename": "dd_sparse_aio_disk", 00:07:30.957 "name": "dd_aio" 00:07:30.957 }, 00:07:30.957 "method": "bdev_aio_create" 00:07:30.957 }, 00:07:30.957 { 00:07:30.957 "params": { 00:07:30.957 "lvs_name": "dd_lvstore", 00:07:30.957 "bdev_name": "dd_aio" 00:07:30.957 }, 00:07:30.957 "method": "bdev_lvol_create_lvstore" 00:07:30.957 }, 00:07:30.957 { 00:07:30.957 "method": "bdev_wait_for_examine" 00:07:30.957 } 00:07:30.957 ] 00:07:30.957 } 00:07:30.957 ] 00:07:30.957 } 00:07:31.215 [2024-12-11 13:48:24.073739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.215 [2024-12-11 13:48:24.128876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.215 [2024-12-11 13:48:24.184367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.473  [2024-12-11T13:48:24.520Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:31.473 00:07:31.473 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:31.731 ************************************ 00:07:31.731 END TEST dd_sparse_file_to_file 00:07:31.731 ************************************ 00:07:31.731 00:07:31.731 real 0m0.660s 00:07:31.731 user 0m0.414s 00:07:31.731 sys 0m0.355s 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:31.731 ************************************ 00:07:31.731 START TEST dd_sparse_file_to_bdev 
00:07:31.731 ************************************ 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:31.731 13:48:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.731 [2024-12-11 13:48:24.635327] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:31.731 [2024-12-11 13:48:24.635426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62749 ] 00:07:31.731 { 00:07:31.731 "subsystems": [ 00:07:31.731 { 00:07:31.731 "subsystem": "bdev", 00:07:31.731 "config": [ 00:07:31.731 { 00:07:31.731 "params": { 00:07:31.731 "block_size": 4096, 00:07:31.731 "filename": "dd_sparse_aio_disk", 00:07:31.731 "name": "dd_aio" 00:07:31.731 }, 00:07:31.731 "method": "bdev_aio_create" 00:07:31.731 }, 00:07:31.731 { 00:07:31.731 "params": { 00:07:31.731 "lvs_name": "dd_lvstore", 00:07:31.731 "lvol_name": "dd_lvol", 00:07:31.731 "size_in_mib": 36, 00:07:31.731 "thin_provision": true 00:07:31.731 }, 00:07:31.731 "method": "bdev_lvol_create" 00:07:31.731 }, 00:07:31.731 { 00:07:31.731 "method": "bdev_wait_for_examine" 00:07:31.731 } 00:07:31.731 ] 00:07:31.731 } 00:07:31.731 ] 00:07:31.731 } 00:07:31.731 [2024-12-11 13:48:24.776951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.989 [2024-12-11 13:48:24.835864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.989 [2024-12-11 13:48:24.894497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.989  [2024-12-11T13:48:25.295Z] Copying: 12/36 [MB] (average 521 MBps) 00:07:32.248 00:07:32.248 00:07:32.248 real 0m0.626s 00:07:32.248 user 0m0.393s 00:07:32.248 sys 0m0.353s 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:32.248 ************************************ 00:07:32.248 END TEST dd_sparse_file_to_bdev 00:07:32.248 ************************************ 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:32.248 ************************************ 00:07:32.248 START TEST dd_sparse_bdev_to_file 00:07:32.248 ************************************ 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:32.248 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:32.506 [2024-12-11 13:48:25.316295] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:32.506 [2024-12-11 13:48:25.316400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:07:32.506 { 00:07:32.506 "subsystems": [ 00:07:32.506 { 00:07:32.506 "subsystem": "bdev", 00:07:32.506 "config": [ 00:07:32.506 { 00:07:32.506 "params": { 00:07:32.506 "block_size": 4096, 00:07:32.506 "filename": "dd_sparse_aio_disk", 00:07:32.506 "name": "dd_aio" 00:07:32.506 }, 00:07:32.506 "method": "bdev_aio_create" 00:07:32.506 }, 00:07:32.506 { 00:07:32.506 "method": "bdev_wait_for_examine" 00:07:32.506 } 00:07:32.506 ] 00:07:32.506 } 00:07:32.506 ] 00:07:32.506 } 00:07:32.506 [2024-12-11 13:48:25.465961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.506 [2024-12-11 13:48:25.522867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.763 [2024-12-11 13:48:25.578209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.763  [2024-12-11T13:48:26.068Z] Copying: 12/36 [MB] (average 1200 MBps) 00:07:33.021 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:33.021 00:07:33.021 real 0m0.645s 00:07:33.021 user 0m0.407s 00:07:33.021 sys 0m0.346s 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:33.021 ************************************ 00:07:33.021 END TEST dd_sparse_bdev_to_file 00:07:33.021 ************************************ 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:33.021 00:07:33.021 real 0m2.325s 00:07:33.021 user 0m1.402s 00:07:33.021 sys 0m1.255s 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.021 13:48:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:33.021 ************************************ 00:07:33.021 END TEST spdk_dd_sparse 00:07:33.021 ************************************ 00:07:33.021 13:48:26 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:33.021 13:48:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.021 13:48:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.021 13:48:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:33.021 ************************************ 00:07:33.021 START TEST spdk_dd_negative 00:07:33.021 ************************************ 00:07:33.021 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:33.279 * Looking for test storage... 
00:07:33.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.280 --rc genhtml_branch_coverage=1 00:07:33.280 --rc genhtml_function_coverage=1 00:07:33.280 --rc genhtml_legend=1 00:07:33.280 --rc geninfo_all_blocks=1 00:07:33.280 --rc geninfo_unexecuted_blocks=1 00:07:33.280 00:07:33.280 ' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.280 --rc genhtml_branch_coverage=1 00:07:33.280 --rc genhtml_function_coverage=1 00:07:33.280 --rc genhtml_legend=1 00:07:33.280 --rc geninfo_all_blocks=1 00:07:33.280 --rc geninfo_unexecuted_blocks=1 00:07:33.280 00:07:33.280 ' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.280 --rc genhtml_branch_coverage=1 00:07:33.280 --rc genhtml_function_coverage=1 00:07:33.280 --rc genhtml_legend=1 00:07:33.280 --rc geninfo_all_blocks=1 00:07:33.280 --rc geninfo_unexecuted_blocks=1 00:07:33.280 00:07:33.280 ' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.280 --rc genhtml_branch_coverage=1 00:07:33.280 --rc genhtml_function_coverage=1 00:07:33.280 --rc genhtml_legend=1 00:07:33.280 --rc geninfo_all_blocks=1 00:07:33.280 --rc geninfo_unexecuted_blocks=1 00:07:33.280 00:07:33.280 ' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.280 ************************************ 00:07:33.280 START TEST 
dd_invalid_arguments 00:07:33.280 ************************************ 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.280 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:33.280 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:33.280 00:07:33.280 CPU options: 00:07:33.280 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:33.280 (like [0,1,10]) 00:07:33.280 --lcores lcore to CPU mapping list. The list is in the format: 00:07:33.280 [<,lcores[@CPUs]>...] 00:07:33.280 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:33.280 Within the group, '-' is used for range separator, 00:07:33.280 ',' is used for single number separator. 00:07:33.280 '( )' can be omitted for single element group, 00:07:33.280 '@' can be omitted if cpus and lcores have the same value 00:07:33.280 --disable-cpumask-locks Disable CPU core lock files. 00:07:33.280 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:33.280 pollers in the app support interrupt mode) 00:07:33.280 -p, --main-core main (primary) core for DPDK 00:07:33.280 00:07:33.280 Configuration options: 00:07:33.280 -c, --config, --json JSON config file 00:07:33.280 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:33.280 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:33.280 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:33.280 --rpcs-allowed comma-separated list of permitted RPCS 00:07:33.281 --json-ignore-init-errors don't exit on invalid config entry 00:07:33.281 00:07:33.281 Memory options: 00:07:33.281 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:33.281 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:33.281 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:33.281 -R, --huge-unlink unlink huge files after initialization 00:07:33.281 -n, --mem-channels number of memory channels used for DPDK 00:07:33.281 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:33.281 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:33.281 --no-huge run without using hugepages 00:07:33.281 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:33.281 -i, --shm-id shared memory ID (optional) 00:07:33.281 -g, --single-file-segments force creating just one hugetlbfs file 00:07:33.281 00:07:33.281 PCI options: 00:07:33.281 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:33.281 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:33.281 -u, --no-pci disable PCI access 00:07:33.281 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:33.281 00:07:33.281 Log options: 00:07:33.281 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:33.281 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:33.281 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:33.281 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:33.281 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:33.281 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:33.281 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:33.281 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:33.281 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:33.281 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:33.281 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:33.281 --silence-noticelog disable notice level logging to stderr 00:07:33.281 00:07:33.281 Trace options: 00:07:33.281 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:33.281 setting 0 to disable trace (default 32768) 00:07:33.281 Tracepoints vary in size and can use more than one trace entry. 00:07:33.281 -e, --tpoint-group [:] 00:07:33.281 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:33.281 [2024-12-11 13:48:26.283780] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:07:33.281 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:33.281 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:33.281 bdev_raid, scheduler, all). 00:07:33.281 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:33.281 a tracepoint group. First tpoint inside a group can be enabled by 00:07:33.281 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:33.281 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:33.281 in /include/spdk_internal/trace_defs.h 00:07:33.281 00:07:33.281 Other options: 00:07:33.281 -h, --help show this usage 00:07:33.281 -v, --version print SPDK version 00:07:33.281 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:33.281 --env-context Opaque context for use of the env implementation 00:07:33.281 00:07:33.281 Application specific: 00:07:33.281 [--------- DD Options ---------] 00:07:33.281 --if Input file. Must specify either --if or --ib. 00:07:33.281 --ib Input bdev. Must specifier either --if or --ib 00:07:33.281 --of Output file. Must specify either --of or --ob. 00:07:33.281 --ob Output bdev. Must specify either --of or --ob. 00:07:33.281 --iflag Input file flags. 00:07:33.281 --oflag Output file flags. 00:07:33.281 --bs I/O unit size (default: 4096) 00:07:33.281 --qd Queue depth (default: 2) 00:07:33.281 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:33.281 --skip Skip this many I/O units at start of input. (default: 0) 00:07:33.281 --seek Skip this many I/O units at start of output. (default: 0) 00:07:33.281 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:33.281 --sparse Enable hole skipping in input target 00:07:33.281 Available iflag and oflag values: 00:07:33.281 append - append mode 00:07:33.281 direct - use direct I/O for data 00:07:33.281 directory - fail unless a directory 00:07:33.281 dsync - use synchronized I/O for data 00:07:33.281 noatime - do not update access time 00:07:33.281 noctty - do not assign controlling terminal from file 00:07:33.281 nofollow - do not follow symlinks 00:07:33.281 nonblock - use non-blocking I/O 00:07:33.281 sync - use synchronized I/O for data and metadata 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.281 00:07:33.281 real 0m0.078s 00:07:33.281 user 0m0.045s 00:07:33.281 sys 0m0.032s 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.281 13:48:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:33.281 ************************************ 00:07:33.281 END TEST dd_invalid_arguments 00:07:33.281 ************************************ 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.539 ************************************ 00:07:33.539 START TEST dd_double_input 00:07:33.539 ************************************ 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.539 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.539 [2024-12-11 13:48:26.417285] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
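A minimal sketch (not from the captured run) of the property this check asserts: spdk_dd must exit non-zero when --if and --ib are both given; the repo path is shortened for brevity.
  # expect failure: --if and --ib are mutually exclusive (see the *ERROR* line above)
  if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=; then
      echo "spdk_dd unexpectedly accepted --if together with --ib" >&2
      exit 1
  fi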
00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.540 00:07:33.540 real 0m0.079s 00:07:33.540 user 0m0.054s 00:07:33.540 sys 0m0.024s 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.540 ************************************ 00:07:33.540 END TEST dd_double_input 00:07:33.540 ************************************ 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.540 ************************************ 00:07:33.540 START TEST dd_double_output 00:07:33.540 ************************************ 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.540 [2024-12-11 13:48:26.542432] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.540 00:07:33.540 real 0m0.078s 00:07:33.540 user 0m0.048s 00:07:33.540 sys 0m0.028s 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.540 13:48:26 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:33.540 ************************************ 00:07:33.540 END TEST dd_double_output 00:07:33.540 ************************************ 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 ************************************ 00:07:33.799 START TEST dd_no_input 00:07:33.799 ************************************ 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.799 [2024-12-11 13:48:26.675730] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.799 00:07:33.799 real 0m0.080s 00:07:33.799 user 0m0.053s 00:07:33.799 sys 0m0.025s 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 ************************************ 00:07:33.799 END TEST dd_no_input 00:07:33.799 ************************************ 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 ************************************ 00:07:33.799 START TEST dd_no_output 00:07:33.799 ************************************ 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.799 [2024-12-11 13:48:26.811240] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:07:33.799 13:48:26 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.799 00:07:33.799 real 0m0.081s 00:07:33.799 user 0m0.048s 00:07:33.799 sys 0m0.031s 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.799 13:48:26 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 ************************************ 00:07:33.799 END TEST dd_no_output 00:07:33.799 ************************************ 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 START TEST dd_wrong_blocksize 00:07:34.058 ************************************ 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.058 [2024-12-11 13:48:26.935726] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.058 00:07:34.058 real 0m0.075s 00:07:34.058 user 0m0.044s 00:07:34.058 sys 0m0.029s 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 END TEST dd_wrong_blocksize 00:07:34.058 ************************************ 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.058 13:48:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 START TEST dd_smaller_blocksize 00:07:34.058 ************************************ 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.058 
13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.058 13:48:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:34.058 [2024-12-11 13:48:27.062200] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:34.058 [2024-12-11 13:48:27.062297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63014 ] 00:07:34.317 [2024-12-11 13:48:27.214838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.317 [2024-12-11 13:48:27.275570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.317 [2024-12-11 13:48:27.333327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.883 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:34.883 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:35.142 [2024-12-11 13:48:27.939227] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:35.142 [2024-12-11 13:48:27.939311] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.142 [2024-12-11 13:48:28.063052] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.142 00:07:35.142 real 0m1.120s 00:07:35.142 user 0m0.415s 00:07:35.142 sys 0m0.596s 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:35.142 ************************************ 00:07:35.142 END TEST dd_smaller_blocksize 00:07:35.142 ************************************ 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.142 ************************************ 00:07:35.142 START TEST dd_invalid_count 00:07:35.142 ************************************ 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
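A simplified sketch of the NOT wrapper these negative checks rely on; this is an assumption based only on the traces above and below, not the actual autotest_common.sh code: run the command and succeed only when it fails.
  NOT() {
      # simplified: invert the wrapped command's exit status
      if "$@"; then
          return 1    # command unexpectedly succeeded -> negative test fails
      fi
      return 0        # command failed as expected -> negative test passes
  }
  # used as in the surrounding checks, e.g.:
  NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --count=-9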
00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.142 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:35.401 [2024-12-11 13:48:28.236017] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.401 00:07:35.401 real 0m0.076s 00:07:35.401 user 0m0.047s 00:07:35.401 sys 0m0.027s 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:35.401 ************************************ 00:07:35.401 END TEST dd_invalid_count 00:07:35.401 ************************************ 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.401 ************************************ 
00:07:35.401 START TEST dd_invalid_oflag 00:07:35.401 ************************************ 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:35.401 [2024-12-11 13:48:28.357535] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.401 00:07:35.401 real 0m0.072s 00:07:35.401 user 0m0.048s 00:07:35.401 sys 0m0.022s 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.401 ************************************ 00:07:35.401 END TEST dd_invalid_oflag 00:07:35.401 ************************************ 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.401 ************************************ 00:07:35.401 START TEST dd_invalid_iflag 00:07:35.401 
************************************ 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.401 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:35.660 [2024-12-11 13:48:28.485915] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.660 00:07:35.660 real 0m0.079s 00:07:35.660 user 0m0.050s 00:07:35.660 sys 0m0.028s 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:35.660 ************************************ 00:07:35.660 END TEST dd_invalid_iflag 00:07:35.660 ************************************ 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.660 ************************************ 00:07:35.660 START TEST dd_unknown_flag 00:07:35.660 ************************************ 00:07:35.660 
13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.660 13:48:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:35.660 [2024-12-11 13:48:28.615969] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:35.660 [2024-12-11 13:48:28.616061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63111 ] 00:07:35.923 [2024-12-11 13:48:28.765991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.923 [2024-12-11 13:48:28.824322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.923 [2024-12-11 13:48:28.882477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.924 [2024-12-11 13:48:28.924597] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:35.924 [2024-12-11 13:48:28.924671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.924 [2024-12-11 13:48:28.924741] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:35.924 [2024-12-11 13:48:28.924758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.924 [2024-12-11 13:48:28.925025] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:35.924 [2024-12-11 13:48:28.925042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.924 [2024-12-11 13:48:28.925099] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:35.924 [2024-12-11 13:48:28.925110] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:36.182 [2024-12-11 13:48:29.055177] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.182 00:07:36.182 real 0m0.570s 00:07:36.182 user 0m0.310s 00:07:36.182 sys 0m0.169s 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.182 ************************************ 00:07:36.182 END TEST dd_unknown_flag 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:36.182 ************************************ 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:36.182 ************************************ 00:07:36.182 START TEST dd_invalid_json 00:07:36.182 ************************************ 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.182 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:36.440 [2024-12-11 13:48:29.232588] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:36.440 [2024-12-11 13:48:29.232679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:07:36.440 [2024-12-11 13:48:29.379247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.440 [2024-12-11 13:48:29.440372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.440 [2024-12-11 13:48:29.440460] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:36.440 [2024-12-11 13:48:29.440476] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.440 [2024-12-11 13:48:29.440485] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.440 [2024-12-11 13:48:29.440525] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.699 00:07:36.699 real 0m0.346s 00:07:36.699 user 0m0.183s 00:07:36.699 sys 0m0.062s 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:36.699 ************************************ 00:07:36.699 END TEST dd_invalid_json 00:07:36.699 ************************************ 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:36.699 ************************************ 00:07:36.699 START TEST dd_invalid_seek 00:07:36.699 ************************************ 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:36.699 
13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.699 13:48:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:36.699 { 00:07:36.699 "subsystems": [ 00:07:36.699 { 00:07:36.699 "subsystem": "bdev", 00:07:36.699 "config": [ 00:07:36.699 { 00:07:36.699 "params": { 00:07:36.699 "block_size": 512, 00:07:36.699 "num_blocks": 512, 00:07:36.699 "name": "malloc0" 00:07:36.699 }, 00:07:36.699 "method": "bdev_malloc_create" 00:07:36.699 }, 00:07:36.699 { 00:07:36.699 "params": { 00:07:36.699 "block_size": 512, 00:07:36.699 "num_blocks": 512, 00:07:36.699 "name": "malloc1" 00:07:36.699 }, 00:07:36.699 "method": "bdev_malloc_create" 00:07:36.699 }, 00:07:36.699 { 00:07:36.699 "method": "bdev_wait_for_examine" 00:07:36.699 } 00:07:36.699 ] 00:07:36.699 } 00:07:36.699 ] 00:07:36.699 } 00:07:36.699 [2024-12-11 13:48:29.631573] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:36.699 [2024-12-11 13:48:29.632267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63169 ] 00:07:36.957 [2024-12-11 13:48:29.783729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.957 [2024-12-11 13:48:29.849091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.957 [2024-12-11 13:48:29.904867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.957 [2024-12-11 13:48:29.968150] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:36.957 [2024-12-11 13:48:29.968244] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.216 [2024-12-11 13:48:30.091966] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.216 00:07:37.216 real 0m0.589s 00:07:37.216 user 0m0.383s 00:07:37.216 sys 0m0.162s 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.216 ************************************ 00:07:37.216 END TEST dd_invalid_seek 00:07:37.216 ************************************ 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:37.216 ************************************ 00:07:37.216 START TEST dd_invalid_skip 00:07:37.216 ************************************ 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.216 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:37.475 [2024-12-11 13:48:30.262547] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:37.475 [2024-12-11 13:48:30.262678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:07:37.475 { 00:07:37.475 "subsystems": [ 00:07:37.475 { 00:07:37.475 "subsystem": "bdev", 00:07:37.475 "config": [ 00:07:37.475 { 00:07:37.475 "params": { 00:07:37.475 "block_size": 512, 00:07:37.475 "num_blocks": 512, 00:07:37.475 "name": "malloc0" 00:07:37.475 }, 00:07:37.475 "method": "bdev_malloc_create" 00:07:37.475 }, 00:07:37.475 { 00:07:37.475 "params": { 00:07:37.475 "block_size": 512, 00:07:37.475 "num_blocks": 512, 00:07:37.475 "name": "malloc1" 00:07:37.475 }, 00:07:37.475 "method": "bdev_malloc_create" 00:07:37.475 }, 00:07:37.475 { 00:07:37.475 "method": "bdev_wait_for_examine" 00:07:37.475 } 00:07:37.475 ] 00:07:37.475 } 00:07:37.475 ] 00:07:37.475 } 00:07:37.475 [2024-12-11 13:48:30.404635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.475 [2024-12-11 13:48:30.453894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.475 [2024-12-11 13:48:30.509291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.733 [2024-12-11 13:48:30.574432] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:37.733 [2024-12-11 13:48:30.574530] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.733 [2024-12-11 13:48:30.692827] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.733 00:07:37.733 real 0m0.547s 00:07:37.733 user 0m0.352s 00:07:37.733 sys 0m0.157s 00:07:37.733 ************************************ 00:07:37.733 END TEST dd_invalid_skip 00:07:37.733 ************************************ 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.733 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:37.992 ************************************ 00:07:37.992 START TEST dd_invalid_input_count 00:07:37.992 ************************************ 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:37.992 13:48:30 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.992 13:48:30 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:37.992 { 00:07:37.992 "subsystems": [ 00:07:37.992 { 00:07:37.992 "subsystem": "bdev", 00:07:37.992 "config": [ 00:07:37.992 { 00:07:37.992 "params": { 00:07:37.992 "block_size": 512, 00:07:37.992 "num_blocks": 512, 00:07:37.992 "name": "malloc0" 00:07:37.992 }, 
00:07:37.992 "method": "bdev_malloc_create" 00:07:37.992 }, 00:07:37.992 { 00:07:37.992 "params": { 00:07:37.992 "block_size": 512, 00:07:37.992 "num_blocks": 512, 00:07:37.992 "name": "malloc1" 00:07:37.992 }, 00:07:37.992 "method": "bdev_malloc_create" 00:07:37.992 }, 00:07:37.992 { 00:07:37.992 "method": "bdev_wait_for_examine" 00:07:37.992 } 00:07:37.992 ] 00:07:37.992 } 00:07:37.992 ] 00:07:37.992 } 00:07:37.992 [2024-12-11 13:48:30.869829] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:37.992 [2024-12-11 13:48:30.869927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63236 ] 00:07:37.992 [2024-12-11 13:48:31.018888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.250 [2024-12-11 13:48:31.076052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.250 [2024-12-11 13:48:31.129917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.250 [2024-12-11 13:48:31.192751] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:38.250 [2024-12-11 13:48:31.192841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.508 [2024-12-11 13:48:31.309515] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.508 00:07:38.508 real 0m0.566s 00:07:38.508 user 0m0.370s 00:07:38.508 sys 0m0.153s 00:07:38.508 ************************************ 00:07:38.508 END TEST dd_invalid_input_count 00:07:38.508 ************************************ 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 ************************************ 00:07:38.508 START TEST dd_invalid_output_count 00:07:38.508 ************************************ 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.508 13:48:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:38.508 { 00:07:38.508 "subsystems": [ 00:07:38.508 { 00:07:38.508 "subsystem": "bdev", 00:07:38.508 "config": [ 00:07:38.508 { 00:07:38.508 "params": { 00:07:38.508 "block_size": 512, 00:07:38.508 "num_blocks": 512, 00:07:38.508 "name": "malloc0" 00:07:38.508 }, 00:07:38.508 "method": "bdev_malloc_create" 00:07:38.508 }, 00:07:38.508 { 00:07:38.508 "method": "bdev_wait_for_examine" 00:07:38.508 } 00:07:38.508 ] 00:07:38.508 } 00:07:38.508 ] 00:07:38.508 } 00:07:38.508 [2024-12-11 13:48:31.504539] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:38.508 [2024-12-11 13:48:31.504630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63275 ] 00:07:38.765 [2024-12-11 13:48:31.657755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.765 [2024-12-11 13:48:31.713000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.765 [2024-12-11 13:48:31.770199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.023 [2024-12-11 13:48:31.827422] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:39.023 [2024-12-11 13:48:31.827517] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.023 [2024-12-11 13:48:31.949408] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.023 00:07:39.023 real 0m0.592s 00:07:39.023 user 0m0.377s 00:07:39.023 sys 0m0.183s 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.023 ************************************ 00:07:39.023 END TEST dd_invalid_output_count 00:07:39.023 ************************************ 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.023 ************************************ 00:07:39.023 START TEST dd_bs_not_multiple 00:07:39.023 ************************************ 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:39.023 13:48:32 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:39.023 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.282 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:39.282 [2024-12-11 13:48:32.118235] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:39.282 [2024-12-11 13:48:32.118319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63307 ] 00:07:39.282 { 00:07:39.282 "subsystems": [ 00:07:39.282 { 00:07:39.282 "subsystem": "bdev", 00:07:39.282 "config": [ 00:07:39.282 { 00:07:39.282 "params": { 00:07:39.282 "block_size": 512, 00:07:39.282 "num_blocks": 512, 00:07:39.282 "name": "malloc0" 00:07:39.282 }, 00:07:39.282 "method": "bdev_malloc_create" 00:07:39.282 }, 00:07:39.282 { 00:07:39.282 "params": { 00:07:39.282 "block_size": 512, 00:07:39.282 "num_blocks": 512, 00:07:39.282 "name": "malloc1" 00:07:39.282 }, 00:07:39.282 "method": "bdev_malloc_create" 00:07:39.282 }, 00:07:39.282 { 00:07:39.282 "method": "bdev_wait_for_examine" 00:07:39.282 } 00:07:39.282 ] 00:07:39.282 } 00:07:39.282 ] 00:07:39.282 } 00:07:39.282 [2024-12-11 13:48:32.262251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.282 [2024-12-11 13:48:32.314618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.541 [2024-12-11 13:48:32.370928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.541 [2024-12-11 13:48:32.436237] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:39.541 [2024-12-11 13:48:32.436311] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.541 [2024-12-11 13:48:32.557767] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.799 00:07:39.799 real 0m0.554s 00:07:39.799 user 0m0.375s 00:07:39.799 sys 0m0.149s 00:07:39.799 ************************************ 00:07:39.799 END TEST dd_bs_not_multiple 00:07:39.799 ************************************ 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 00:07:39.799 real 0m6.637s 00:07:39.799 user 0m3.591s 00:07:39.799 sys 0m2.489s 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.799 13:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 ************************************ 00:07:39.799 END TEST spdk_dd_negative 00:07:39.799 ************************************ 00:07:39.799 ************************************ 00:07:39.799 END TEST spdk_dd 00:07:39.799 ************************************ 00:07:39.799 00:07:39.799 real 1m19.746s 00:07:39.799 user 0m50.895s 00:07:39.799 sys 0m35.656s 00:07:39.799 13:48:32 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:39.799 13:48:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 13:48:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:39.799 13:48:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.799 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 13:48:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:39.799 13:48:32 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:39.799 13:48:32 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.799 13:48:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.799 13:48:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.799 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 ************************************ 00:07:39.799 START TEST nvmf_tcp 00:07:39.799 ************************************ 00:07:39.799 13:48:32 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:40.057 * Looking for test storage... 00:07:40.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.057 13:48:32 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.057 --rc genhtml_branch_coverage=1 00:07:40.057 --rc genhtml_function_coverage=1 00:07:40.057 --rc genhtml_legend=1 00:07:40.057 --rc geninfo_all_blocks=1 00:07:40.057 --rc geninfo_unexecuted_blocks=1 00:07:40.057 00:07:40.057 ' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.057 --rc genhtml_branch_coverage=1 00:07:40.057 --rc genhtml_function_coverage=1 00:07:40.057 --rc genhtml_legend=1 00:07:40.057 --rc geninfo_all_blocks=1 00:07:40.057 --rc geninfo_unexecuted_blocks=1 00:07:40.057 00:07:40.057 ' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.057 --rc genhtml_branch_coverage=1 00:07:40.057 --rc genhtml_function_coverage=1 00:07:40.057 --rc genhtml_legend=1 00:07:40.057 --rc geninfo_all_blocks=1 00:07:40.057 --rc geninfo_unexecuted_blocks=1 00:07:40.057 00:07:40.057 ' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.057 --rc genhtml_branch_coverage=1 00:07:40.057 --rc genhtml_function_coverage=1 00:07:40.057 --rc genhtml_legend=1 00:07:40.057 --rc geninfo_all_blocks=1 00:07:40.057 --rc geninfo_unexecuted_blocks=1 00:07:40.057 00:07:40.057 ' 00:07:40.057 13:48:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:40.057 13:48:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:40.057 13:48:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.057 13:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.057 ************************************ 00:07:40.057 START TEST nvmf_target_core 00:07:40.057 ************************************ 00:07:40.057 13:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:40.057 * Looking for test storage... 00:07:40.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:40.057 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.057 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.058 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.316 --rc genhtml_branch_coverage=1 00:07:40.316 --rc genhtml_function_coverage=1 00:07:40.316 --rc genhtml_legend=1 00:07:40.316 --rc geninfo_all_blocks=1 00:07:40.316 --rc geninfo_unexecuted_blocks=1 00:07:40.316 00:07:40.316 ' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.316 --rc genhtml_branch_coverage=1 00:07:40.316 --rc genhtml_function_coverage=1 00:07:40.316 --rc genhtml_legend=1 00:07:40.316 --rc geninfo_all_blocks=1 00:07:40.316 --rc geninfo_unexecuted_blocks=1 00:07:40.316 00:07:40.316 ' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.316 --rc genhtml_branch_coverage=1 00:07:40.316 --rc genhtml_function_coverage=1 00:07:40.316 --rc genhtml_legend=1 00:07:40.316 --rc geninfo_all_blocks=1 00:07:40.316 --rc geninfo_unexecuted_blocks=1 00:07:40.316 00:07:40.316 ' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.316 --rc genhtml_branch_coverage=1 00:07:40.316 --rc genhtml_function_coverage=1 00:07:40.316 --rc genhtml_legend=1 00:07:40.316 --rc geninfo_all_blocks=1 00:07:40.316 --rc geninfo_unexecuted_blocks=1 00:07:40.316 00:07:40.316 ' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.316 13:48:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.317 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.317 ************************************ 00:07:40.317 START TEST nvmf_host_management 00:07:40.317 ************************************ 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.317 * Looking for test storage... 
00:07:40.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.317 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.576 --rc genhtml_branch_coverage=1 00:07:40.576 --rc genhtml_function_coverage=1 00:07:40.576 --rc genhtml_legend=1 00:07:40.576 --rc geninfo_all_blocks=1 00:07:40.576 --rc geninfo_unexecuted_blocks=1 00:07:40.576 00:07:40.576 ' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.576 --rc genhtml_branch_coverage=1 00:07:40.576 --rc genhtml_function_coverage=1 00:07:40.576 --rc genhtml_legend=1 00:07:40.576 --rc geninfo_all_blocks=1 00:07:40.576 --rc geninfo_unexecuted_blocks=1 00:07:40.576 00:07:40.576 ' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.576 --rc genhtml_branch_coverage=1 00:07:40.576 --rc genhtml_function_coverage=1 00:07:40.576 --rc genhtml_legend=1 00:07:40.576 --rc geninfo_all_blocks=1 00:07:40.576 --rc geninfo_unexecuted_blocks=1 00:07:40.576 00:07:40.576 ' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.576 --rc genhtml_branch_coverage=1 00:07:40.576 --rc genhtml_function_coverage=1 00:07:40.576 --rc genhtml_legend=1 00:07:40.576 --rc geninfo_all_blocks=1 00:07:40.576 --rc geninfo_unexecuted_blocks=1 00:07:40.576 00:07:40.576 ' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
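The repeated "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is harmless: build_nvmf_app_args guards an optional flag with bash's integer test while the variable is still empty, so the [ builtin fails the comparison, prints the warning on stderr, and the guarded branch is simply skipped. A minimal repro of the behaviour (the variable name below is hypothetical, used only to illustrate the usual fix):

    # What the trace shows: '[' '' -eq 1 ']'
    # The [ builtin cannot treat an empty string as an integer; it warns on
    # stderr and returns a non-zero status, so the if-branch is not taken.
    if [ "" -eq 1 ]; then
        echo "never reached"
    fi
    # stderr: bash: [: : integer expression expected

    # Typical hardening: default the value to 0 before the numeric test.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi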
00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.576 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.576 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.577 13:48:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:40.577 Cannot find device "nvmf_init_br" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:40.577 Cannot find device "nvmf_init_br2" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:40.577 Cannot find device "nvmf_tgt_br" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:40.577 Cannot find device "nvmf_tgt_br2" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:40.577 Cannot find device "nvmf_init_br" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:40.577 Cannot find device "nvmf_init_br2" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:40.577 Cannot find device "nvmf_tgt_br" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:40.577 Cannot find device "nvmf_tgt_br2" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:40.577 Cannot find device "nvmf_br" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:40.577 Cannot find device "nvmf_init_if" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:40.577 Cannot find device "nvmf_init_if2" 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:40.577 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:40.835 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:41.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:41.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:07:41.093 00:07:41.093 --- 10.0.0.3 ping statistics --- 00:07:41.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.093 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:41.093 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:41.093 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:07:41.093 00:07:41.093 --- 10.0.0.4 ping statistics --- 00:07:41.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.093 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:41.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:41.093 00:07:41.093 --- 10.0.0.1 ping statistics --- 00:07:41.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.093 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:41.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:41.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:41.093 00:07:41.093 --- 10.0.0.2 ping statistics --- 00:07:41.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.093 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=63656 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 63656 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63656 ']' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.093 13:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.093 [2024-12-11 13:48:33.997392] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:41.093 [2024-12-11 13:48:33.997494] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.351 [2024-12-11 13:48:34.151018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.351 [2024-12-11 13:48:34.218360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.351 [2024-12-11 13:48:34.218440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.351 [2024-12-11 13:48:34.218455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.351 [2024-12-11 13:48:34.218466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.351 [2024-12-11 13:48:34.218475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.351 [2024-12-11 13:48:34.219749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.351 [2024-12-11 13:48:34.219887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.351 [2024-12-11 13:48:34.220032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.351 [2024-12-11 13:48:34.220039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.351 [2024-12-11 13:48:34.279331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 [2024-12-11 13:48:35.098027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
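The nvmf_veth_init sequence traced above prepares the virtual network the TCP tests run over: the namespace nvmf_tgt_ns_spdk holds the target-side ends of the veth pairs, the initiator-side ends stay in the root namespace, and the two halves are joined through the nvmf_br bridge before the firewall is opened for port 4420. A condensed sketch of one initiator/target pair, with device names and addresses taken from the trace (the real helper sets up the second *_if2/*_br2 pair and the 10.0.0.2/10.0.0.4 addresses the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                         # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener

The four pings that follow in the trace confirm both directions work (root namespace to 10.0.0.3/10.0.0.4, namespace back to 10.0.0.1/10.0.0.2) before nvmf_tgt is started inside the namespace with "ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E".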
00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 Malloc0 00:07:42.284 [2024-12-11 13:48:35.176854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=63710 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 63710 /var/tmp/bdevperf.sock 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63710 ']' 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
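The "cat | rpc_cmd" step above replays the JSON-RPC batch that host_management.sh wrote into rpcs.txt against the running target. The batch itself is not echoed to the log; judging from the Malloc0 bdev and the nvmf_tcp_listen notice for 10.0.0.3:4420, it is roughly equivalent to the following rpc.py calls (the NQN, serial, and sizes follow the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 / NVMF_SERIAL defaults set earlier and the cnode0/host0 names used later; the exact flags are an assumption, not a verbatim copy of the script):

    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

bdevperf is then launched on its own RPC socket (/var/tmp/bdevperf.sock) and handed the NVMe-oF attach configuration shown next via --json /dev/fd/63.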
00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:42.284 { 00:07:42.284 "params": { 00:07:42.284 "name": "Nvme$subsystem", 00:07:42.284 "trtype": "$TEST_TRANSPORT", 00:07:42.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:42.284 "adrfam": "ipv4", 00:07:42.284 "trsvcid": "$NVMF_PORT", 00:07:42.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:42.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:42.284 "hdgst": ${hdgst:-false}, 00:07:42.284 "ddgst": ${ddgst:-false} 00:07:42.284 }, 00:07:42.284 "method": "bdev_nvme_attach_controller" 00:07:42.284 } 00:07:42.284 EOF 00:07:42.284 )") 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:42.284 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:42.284 "params": { 00:07:42.284 "name": "Nvme0", 00:07:42.284 "trtype": "tcp", 00:07:42.284 "traddr": "10.0.0.3", 00:07:42.284 "adrfam": "ipv4", 00:07:42.284 "trsvcid": "4420", 00:07:42.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:42.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:42.284 "hdgst": false, 00:07:42.284 "ddgst": false 00:07:42.284 }, 00:07:42.284 "method": "bdev_nvme_attach_controller" 00:07:42.284 }' 00:07:42.284 [2024-12-11 13:48:35.283124] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:42.284 [2024-12-11 13:48:35.283621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:07:42.541 [2024-12-11 13:48:35.435765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.541 [2024-12-11 13:48:35.492814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.541 [2024-12-11 13:48:35.559671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.798 Running I/O for 10 seconds... 
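With bdevperf running the 10-second verify workload (-q 64 -o 65536 -w verify) against the attached Nvme0n1 bdev, waitforio polls the bdev's read counter until real traffic is observed. The loop traced below amounts to the following (a condensed reconstruction, not a verbatim copy of host_management.sh; rpc_cmd is the test suite's rpc.py wrapper):

    i=10
    ret=1
    while (( i != 0 )); do
        # Ask bdevperf's RPC socket for per-bdev I/O statistics and pull out the read count.
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0       # enough reads completed; the host-to-target path is working
            break
        fi
        sleep 0.25
        (( i-- ))
    done

In the trace the first sample reads 67 ops (below the 100-op threshold) and the second 579, after which the test proceeds to remove and re-add the host NQN on cnode0 while I/O is still in flight.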
00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:42.798 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.056 13:48:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.056 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.316 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.316 [2024-12-11 13:48:36.125609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 
[2024-12-11 13:48:36.125776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.316 [2024-12-11 13:48:36.125899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.316 [2024-12-11 13:48:36.125908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.125919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.125928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.125943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.125952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.125963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.125972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 
13:48:36.125983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 
13:48:36.126209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 
13:48:36.126424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 
13:48:36.126638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.317 [2024-12-11 13:48:36.126790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.317 [2024-12-11 13:48:36.126801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 
13:48:36.126861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.126983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.126992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.127013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.318 [2024-12-11 13:48:36.127046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2e30 is same with the state(6) to be set 00:07:43.318 [2024-12-11 13:48:36.127232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:43.318 [2024-12-11 13:48:36.127249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:43.318 [2024-12-11 13:48:36.127270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:43.318 [2024-12-11 13:48:36.127288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:43.318 [2024-12-11 13:48:36.127307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:43.318 [2024-12-11 13:48:36.127316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bde9d0 is same with the state(6) to be set 00:07:43.318 [2024-12-11 13:48:36.128390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:43.318 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:43.318 00:07:43.318 Latency(us) 00:07:43.318 [2024-12-11T13:48:36.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:43.318 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:43.318 Verification LBA range: start 0x0 length 0x400 00:07:43.318 Nvme0n1 : 0.44 1440.08 90.00 144.01 0.00 38911.44 2115.03 39321.60 00:07:43.318 [2024-12-11T13:48:36.365Z] =================================================================================================================== 00:07:43.318 [2024-12-11T13:48:36.365Z] Total : 1440.08 90.00 144.01 0.00 38911.44 2115.03 39321.60 00:07:43.318 [2024-12-11 13:48:36.130286] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.318 [2024-12-11 13:48:36.130307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bde9d0 (9): Bad file descriptor 00:07:43.318 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.318 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:43.318 [2024-12-11 13:48:36.139617] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
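The burst of ABORTED - SQ DELETION completions, the failed bdevperf job and the controller reset above are the intended effect of revoking the host's access while I/O is in flight: the test removes and then re-adds the host NQN on the subsystem, so the target drops the submission queues and the initiator has to reconnect. A condensed sketch of that RPC sequence, with scripts/rpc.py assumed as a stand-in for the rpc_cmd wrapper used in the trace:

    # Revoke host0's access to cnode0 while bdevperf is still writing; queued writes
    # then complete as ABORTED - SQ DELETION, which is the flood of notices above.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Grant access again; the host side reconnects and logs "Resetting controller successful".
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0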
00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 63710 00:07:44.252 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (63710) - No such process 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.252 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.253 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.253 { 00:07:44.253 "params": { 00:07:44.253 "name": "Nvme$subsystem", 00:07:44.253 "trtype": "$TEST_TRANSPORT", 00:07:44.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.253 "adrfam": "ipv4", 00:07:44.253 "trsvcid": "$NVMF_PORT", 00:07:44.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.253 "hdgst": ${hdgst:-false}, 00:07:44.253 "ddgst": ${ddgst:-false} 00:07:44.253 }, 00:07:44.253 "method": "bdev_nvme_attach_controller" 00:07:44.253 } 00:07:44.253 EOF 00:07:44.253 )") 00:07:44.253 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:44.253 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:44.253 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:44.253 13:48:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.253 "params": { 00:07:44.253 "name": "Nvme0", 00:07:44.253 "trtype": "tcp", 00:07:44.253 "traddr": "10.0.0.3", 00:07:44.253 "adrfam": "ipv4", 00:07:44.253 "trsvcid": "4420", 00:07:44.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:44.253 "hdgst": false, 00:07:44.253 "ddgst": false 00:07:44.253 }, 00:07:44.253 "method": "bdev_nvme_attach_controller" 00:07:44.253 }' 00:07:44.253 [2024-12-11 13:48:37.186800] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
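The second bdevperf run above takes its target description straight from the command line: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry for Nvme0 at 10.0.0.3:4420, and the script hands it over as --json /dev/fd/62, which is what a bash process substitution looks like to the child process. A rough reconstruction of that invocation (flags and paths copied from the trace; the exact redirection inside host_management.sh is an assumption):

    # No temp file needed: <( ... ) appears to bdevperf as /dev/fd/NN
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1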
00:07:44.253 [2024-12-11 13:48:37.186880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63752 ] 00:07:44.511 [2024-12-11 13:48:37.330927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.511 [2024-12-11 13:48:37.382873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.511 [2024-12-11 13:48:37.446224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.769 Running I/O for 1 seconds... 00:07:45.703 1478.00 IOPS, 92.38 MiB/s 00:07:45.703 Latency(us) 00:07:45.703 [2024-12-11T13:48:38.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:45.703 Verification LBA range: start 0x0 length 0x400 00:07:45.703 Nvme0n1 : 1.04 1533.14 95.82 0.00 0.00 40938.02 4498.15 37891.72 00:07:45.703 [2024-12-11T13:48:38.750Z] =================================================================================================================== 00:07:45.703 [2024-12-11T13:48:38.750Z] Total : 1533.14 95.82 0.00 0.00 40938.02 4498.15 37891.72 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.961 rmmod nvme_tcp 00:07:45.961 rmmod nvme_fabrics 00:07:45.961 rmmod nvme_keyring 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 63656 ']' 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 63656 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 63656 ']' 00:07:45.961 13:48:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 63656 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63656 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:45.961 killing process with pid 63656 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63656' 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 63656 00:07:45.961 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 63656 00:07:46.220 [2024-12-11 13:48:39.152797] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:46.220 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:46.478 13:48:39 
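killprocess, traced above just before the interface teardown, is the harness's guarded shutdown of the nvmf target: it checks that the pid is still alive, reads the process's comm name so it never kills a sudo wrapper by mistake, and only then kills and reaps it. A rough reconstruction of that behaviour from the trace (Linux ps semantics assumed, as in the trace itself):

    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for an SPDK app
        [ "$name" = sudo ] && return 1               # refuse to kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap the child; ignore its exit status
    }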
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:46.478 00:07:46.478 real 0m6.203s 00:07:46.478 user 0m22.354s 00:07:46.478 sys 0m1.585s 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.478 ************************************ 00:07:46.478 END TEST nvmf_host_management 00:07:46.478 ************************************ 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.478 ************************************ 00:07:46.478 START TEST nvmf_lvol 00:07:46.478 ************************************ 00:07:46.478 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.737 * Looking for test storage... 
00:07:46.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.737 --rc genhtml_branch_coverage=1 00:07:46.737 --rc genhtml_function_coverage=1 00:07:46.737 --rc genhtml_legend=1 00:07:46.737 --rc geninfo_all_blocks=1 00:07:46.737 --rc geninfo_unexecuted_blocks=1 00:07:46.737 00:07:46.737 ' 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.737 --rc genhtml_branch_coverage=1 00:07:46.737 --rc genhtml_function_coverage=1 00:07:46.737 --rc genhtml_legend=1 00:07:46.737 --rc geninfo_all_blocks=1 00:07:46.737 --rc geninfo_unexecuted_blocks=1 00:07:46.737 00:07:46.737 ' 00:07:46.737 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.737 --rc genhtml_branch_coverage=1 00:07:46.737 --rc genhtml_function_coverage=1 00:07:46.738 --rc genhtml_legend=1 00:07:46.738 --rc geninfo_all_blocks=1 00:07:46.738 --rc geninfo_unexecuted_blocks=1 00:07:46.738 00:07:46.738 ' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.738 --rc genhtml_branch_coverage=1 00:07:46.738 --rc genhtml_function_coverage=1 00:07:46.738 --rc genhtml_legend=1 00:07:46.738 --rc geninfo_all_blocks=1 00:07:46.738 --rc geninfo_unexecuted_blocks=1 00:07:46.738 00:07:46.738 ' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.738 13:48:39 
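The scripts/common.sh trace above is the version check run before picking lcov options: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field. A rough reconstruction of that comparison collapsed into one function (padding missing fields with 0 is my assumption, not necessarily what cmp_versions does):

    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # version_lt 1.15 2 succeeds here (lcov 1.15 is older than 2), which selects
    # the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options seen above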
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.738 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:46.738 
13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
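The variables above name the veth topology that nvmf_veth_init builds in the commands that follow: an nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs, the initiator ends left in the root namespace with 10.0.0.1 and 10.0.0.2, the target ends given 10.0.0.3 and 10.0.0.4, and everything joined through the nvmf_br bridge (the "Cannot find device" and "Cannot open network namespace" lines below are just the cleanup of a previous run failing harmlessly). A condensed sketch of that setup for the first pair, assuming iproute2 and root privileges:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up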
00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:46.738 Cannot find device "nvmf_init_br" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:46.738 Cannot find device "nvmf_init_br2" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:46.738 Cannot find device "nvmf_tgt_br" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.738 Cannot find device "nvmf_tgt_br2" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:46.738 Cannot find device "nvmf_init_br" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:46.738 Cannot find device "nvmf_init_br2" 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:46.738 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:46.738 Cannot find device "nvmf_tgt_br" 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:46.997 Cannot find device "nvmf_tgt_br2" 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:46.997 Cannot find device "nvmf_br" 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:46.997 Cannot find device "nvmf_init_if" 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:46.997 Cannot find device "nvmf_init_if2" 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.997 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:46.997 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.997 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:46.998 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:47.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:07:47.256 00:07:47.256 --- 10.0.0.3 ping statistics --- 00:07:47.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.256 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:47.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:47.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:07:47.256 00:07:47.256 --- 10.0.0.4 ping statistics --- 00:07:47.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.256 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:47.256 00:07:47.256 --- 10.0.0.1 ping statistics --- 00:07:47.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.256 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:47.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:07:47.256 00:07:47.256 --- 10.0.0.2 ping statistics --- 00:07:47.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.256 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=64020 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 64020 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 64020 ']' 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.256 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.256 [2024-12-11 13:48:40.143473] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:47.256 [2024-12-11 13:48:40.143547] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.256 [2024-12-11 13:48:40.289523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.514 [2024-12-11 13:48:40.345593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.514 [2024-12-11 13:48:40.345652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.514 [2024-12-11 13:48:40.345665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.515 [2024-12-11 13:48:40.345675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.515 [2024-12-11 13:48:40.345682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.515 [2024-12-11 13:48:40.346801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.515 [2024-12-11 13:48:40.347557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.515 [2024-12-11 13:48:40.347593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.515 [2024-12-11 13:48:40.400526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.515 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.773 [2024-12-11 13:48:40.797922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.032 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.291 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:48.291 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.861 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:48.861 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:49.119 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:49.378 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4cd6e36c-fbbc-44fb-b6d8-f18832efc608 00:07:49.378 13:48:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4cd6e36c-fbbc-44fb-b6d8-f18832efc608 lvol 20 00:07:49.636 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a17e437-5cf6-4cb2-be11-66120baf5203 00:07:49.636 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.894 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a17e437-5cf6-4cb2-be11-66120baf5203 00:07:50.152 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:50.410 [2024-12-11 13:48:43.210389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:50.410 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.669 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64088 00:07:50.669 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:50.669 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:51.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0a17e437-5cf6-4cb2-be11-66120baf5203 MY_SNAPSHOT 00:07:52.169 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=162c638e-63f4-406f-8b15-28ec80a3a19b 00:07:52.169 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0a17e437-5cf6-4cb2-be11-66120baf5203 30 00:07:52.427 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 162c638e-63f4-406f-8b15-28ec80a3a19b MY_CLONE 00:07:52.685 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e983b4a7-62da-474e-80e2-7e52e5fc8026 00:07:52.686 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e983b4a7-62da-474e-80e2-7e52e5fc8026 00:07:53.251 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64088 00:08:01.391 Initializing NVMe Controllers 00:08:01.391 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:01.391 Controller IO queue size 128, less than required. 00:08:01.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:01.391 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:01.391 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:01.391 Initialization complete. Launching workers. 
00:08:01.391 ======================================================== 00:08:01.391 Latency(us) 00:08:01.391 Device Information : IOPS MiB/s Average min max 00:08:01.391 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10488.10 40.97 12204.26 2033.41 58451.80 00:08:01.391 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10537.20 41.16 12147.92 3576.32 65302.32 00:08:01.391 ======================================================== 00:08:01.391 Total : 21025.30 82.13 12176.02 2033.41 65302.32 00:08:01.391 00:08:01.391 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.391 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a17e437-5cf6-4cb2-be11-66120baf5203 00:08:01.652 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cd6e36c-fbbc-44fb-b6d8-f18832efc608 00:08:01.652 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:01.652 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:01.652 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:01.652 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.911 rmmod nvme_tcp 00:08:01.911 rmmod nvme_fabrics 00:08:01.911 rmmod nvme_keyring 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 64020 ']' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 64020 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 64020 ']' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 64020 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64020 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.911 killing process with pid 64020 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 64020' 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 64020 00:08:01.911 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 64020 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:02.171 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:02.430 00:08:02.430 real 0m15.851s 00:08:02.430 user 1m5.533s 00:08:02.430 sys 0m4.157s 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:02.430 ************************************ 00:08:02.430 END TEST nvmf_lvol 00:08:02.430 ************************************ 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.430 ************************************ 00:08:02.430 START TEST nvmf_lvs_grow 00:08:02.430 ************************************ 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:02.430 * Looking for test storage... 00:08:02.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.430 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.689 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.690 --rc genhtml_branch_coverage=1 00:08:02.690 --rc genhtml_function_coverage=1 00:08:02.690 --rc genhtml_legend=1 00:08:02.690 --rc geninfo_all_blocks=1 00:08:02.690 --rc geninfo_unexecuted_blocks=1 00:08:02.690 00:08:02.690 ' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.690 --rc genhtml_branch_coverage=1 00:08:02.690 --rc genhtml_function_coverage=1 00:08:02.690 --rc genhtml_legend=1 00:08:02.690 --rc geninfo_all_blocks=1 00:08:02.690 --rc geninfo_unexecuted_blocks=1 00:08:02.690 00:08:02.690 ' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.690 --rc genhtml_branch_coverage=1 00:08:02.690 --rc genhtml_function_coverage=1 00:08:02.690 --rc genhtml_legend=1 00:08:02.690 --rc geninfo_all_blocks=1 00:08:02.690 --rc geninfo_unexecuted_blocks=1 00:08:02.690 00:08:02.690 ' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.690 --rc genhtml_branch_coverage=1 00:08:02.690 --rc genhtml_function_coverage=1 00:08:02.690 --rc genhtml_legend=1 00:08:02.690 --rc geninfo_all_blocks=1 00:08:02.690 --rc geninfo_unexecuted_blocks=1 00:08:02.690 00:08:02.690 ' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:02.690 13:48:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:02.690 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:02.691 Cannot find device "nvmf_init_br" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:02.691 Cannot find device "nvmf_init_br2" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:02.691 Cannot find device "nvmf_tgt_br" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.691 Cannot find device "nvmf_tgt_br2" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:02.691 Cannot find device "nvmf_init_br" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:02.691 Cannot find device "nvmf_init_br2" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:02.691 Cannot find device "nvmf_tgt_br" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:02.691 Cannot find device "nvmf_tgt_br2" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:02.691 Cannot find device "nvmf_br" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:02.691 Cannot find device "nvmf_init_if" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:02.691 Cannot find device "nvmf_init_if2" 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:02.691 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
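The ip commands traced above wire up the test topology that the NVMF_* variables describe: two initiator-side veth interfaces kept in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and the peer ends of all four pairs enslaved to the nvmf_br bridge so initiator and target can reach each other. A condensed sketch of the same setup for the first pair only, paraphrased from the commands in this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br

The second pair (nvmf_init_if2/nvmf_tgt_if2) is handled identically; the nomaster/down/delete attempts at the start of the block are best-effort cleanup of any leftovers from a previous run, which is why each "Cannot find device" message is followed by a true.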
00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:02.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:02.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:02.950 00:08:02.950 --- 10.0.0.3 ping statistics --- 00:08:02.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.950 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:02.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:02.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:02.950 00:08:02.950 --- 10.0.0.4 ping statistics --- 00:08:02.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.950 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:02.950 00:08:02.950 --- 10.0.0.1 ping statistics --- 00:08:02.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.950 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:02.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:02.950 00:08:02.950 --- 10.0.0.2 ping statistics --- 00:08:02.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.950 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=64473 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 64473 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 64473 ']' 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.950 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.209 [2024-12-11 13:48:56.016304] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
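The ACCEPT rules opening TCP port 4420 are not inserted with iptables directly but through the ipts wrapper, which tags each rule with an SPDK_NVMF comment built from its own arguments; the matching iptr helper, visible in the nvmf_lvol teardown earlier in this log, later removes exactly those rules by round-tripping the ruleset through iptables-save, filtering out the tagged lines, and loading the result with iptables-restore. A minimal sketch of the two helpers as inferred from the expanded commands in this trace (the actual definitions in nvmf/common.sh may differ in detail):

  ipts() {
      # insert the rule and tag it so teardown can find it again
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  iptr() {
      # strip only the SPDK_NVMF-tagged rules, leave the rest of the ruleset untouched
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }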
00:08:03.209 [2024-12-11 13:48:56.016408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.209 [2024-12-11 13:48:56.174362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.209 [2024-12-11 13:48:56.239899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.209 [2024-12-11 13:48:56.239969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.209 [2024-12-11 13:48:56.239984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.209 [2024-12-11 13:48:56.239994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.209 [2024-12-11 13:48:56.240004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.209 [2024-12-11 13:48:56.240455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.467 [2024-12-11 13:48:56.300906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.033 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.291 [2024-12-11 13:48:57.332530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.550 ************************************ 00:08:04.550 START TEST lvs_grow_clean 00:08:04.550 ************************************ 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:04.550 13:48:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.550 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.808 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.808 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:05.066 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1519669-80a4-4856-9727-9c93c127ccf2 00:08:05.066 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:05.066 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1519669-80a4-4856-9727-9c93c127ccf2 lvol 150 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7832d953-9469-43c6-8d1c-9957af9ee7e3 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.633 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:06.199 [2024-12-11 13:48:58.944590] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:06.199 [2024-12-11 13:48:58.944730] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:06.199 true 00:08:06.199 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.199 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:06.458 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:06.458 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.717 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7832d953-9469-43c6-8d1c-9957af9ee7e3 00:08:06.975 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:07.233 [2024-12-11 13:49:00.089228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:07.233 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64561 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64561 /var/tmp/bdevperf.sock 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 64561 ']' 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.491 13:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.491 [2024-12-11 13:49:00.429595] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
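The xtrace above is the setup half of the lvs_grow helper: a 200 MiB file is exposed as an AIO bdev, an lvstore with 4 MiB clusters is built on top of it, a 150 MiB lvol is created and exported over NVMe/TCP on 10.0.0.3:4420, and bdevperf is then started against it. A minimal sketch of that sequence, reusing the paths and commands echoed in this run (the shell variables are illustrative, not names from the test script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 in this run; the rest of the 200 MiB file holds lvstore metadata
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The bdevperf process whose startup follows attaches to that namespace with bdev_nvme_attach_controller and drives randwrite I/O for 10 seconds while the lvstore is grown underneath it.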
00:08:07.491 [2024-12-11 13:49:00.429736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64561 ] 00:08:07.749 [2024-12-11 13:49:00.584331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.749 [2024-12-11 13:49:00.652362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.749 [2024-12-11 13:49:00.710741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.684 13:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.684 13:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:08.684 13:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:08.684 Nvme0n1 00:08:08.684 13:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.251 [ 00:08:09.251 { 00:08:09.251 "name": "Nvme0n1", 00:08:09.251 "aliases": [ 00:08:09.251 "7832d953-9469-43c6-8d1c-9957af9ee7e3" 00:08:09.251 ], 00:08:09.251 "product_name": "NVMe disk", 00:08:09.251 "block_size": 4096, 00:08:09.251 "num_blocks": 38912, 00:08:09.251 "uuid": "7832d953-9469-43c6-8d1c-9957af9ee7e3", 00:08:09.251 "numa_id": -1, 00:08:09.251 "assigned_rate_limits": { 00:08:09.251 "rw_ios_per_sec": 0, 00:08:09.251 "rw_mbytes_per_sec": 0, 00:08:09.251 "r_mbytes_per_sec": 0, 00:08:09.251 "w_mbytes_per_sec": 0 00:08:09.251 }, 00:08:09.251 "claimed": false, 00:08:09.251 "zoned": false, 00:08:09.251 "supported_io_types": { 00:08:09.251 "read": true, 00:08:09.251 "write": true, 00:08:09.251 "unmap": true, 00:08:09.251 "flush": true, 00:08:09.251 "reset": true, 00:08:09.251 "nvme_admin": true, 00:08:09.251 "nvme_io": true, 00:08:09.251 "nvme_io_md": false, 00:08:09.251 "write_zeroes": true, 00:08:09.251 "zcopy": false, 00:08:09.251 "get_zone_info": false, 00:08:09.251 "zone_management": false, 00:08:09.251 "zone_append": false, 00:08:09.251 "compare": true, 00:08:09.251 "compare_and_write": true, 00:08:09.251 "abort": true, 00:08:09.251 "seek_hole": false, 00:08:09.251 "seek_data": false, 00:08:09.251 "copy": true, 00:08:09.251 "nvme_iov_md": false 00:08:09.251 }, 00:08:09.251 "memory_domains": [ 00:08:09.251 { 00:08:09.251 "dma_device_id": "system", 00:08:09.251 "dma_device_type": 1 00:08:09.251 } 00:08:09.251 ], 00:08:09.251 "driver_specific": { 00:08:09.251 "nvme": [ 00:08:09.251 { 00:08:09.251 "trid": { 00:08:09.251 "trtype": "TCP", 00:08:09.251 "adrfam": "IPv4", 00:08:09.251 "traddr": "10.0.0.3", 00:08:09.251 "trsvcid": "4420", 00:08:09.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.251 }, 00:08:09.251 "ctrlr_data": { 00:08:09.251 "cntlid": 1, 00:08:09.251 "vendor_id": "0x8086", 00:08:09.251 "model_number": "SPDK bdev Controller", 00:08:09.251 "serial_number": "SPDK0", 00:08:09.251 "firmware_revision": "25.01", 00:08:09.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.251 "oacs": { 00:08:09.251 "security": 0, 00:08:09.251 "format": 0, 00:08:09.251 "firmware": 0, 
00:08:09.251 "ns_manage": 0 00:08:09.251 }, 00:08:09.251 "multi_ctrlr": true, 00:08:09.251 "ana_reporting": false 00:08:09.251 }, 00:08:09.251 "vs": { 00:08:09.251 "nvme_version": "1.3" 00:08:09.251 }, 00:08:09.251 "ns_data": { 00:08:09.251 "id": 1, 00:08:09.251 "can_share": true 00:08:09.251 } 00:08:09.251 } 00:08:09.251 ], 00:08:09.251 "mp_policy": "active_passive" 00:08:09.251 } 00:08:09.251 } 00:08:09.251 ] 00:08:09.251 13:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64589 00:08:09.251 13:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.251 13:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.251 Running I/O for 10 seconds... 00:08:10.186 Latency(us) 00:08:10.186 [2024-12-11T13:49:03.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.186 Nvme0n1 : 1.00 6571.00 25.67 0.00 0.00 0.00 0.00 0.00 00:08:10.186 [2024-12-11T13:49:03.233Z] =================================================================================================================== 00:08:10.186 [2024-12-11T13:49:03.233Z] Total : 6571.00 25.67 0.00 0.00 0.00 0.00 0.00 00:08:10.186 00:08:11.120 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:11.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.120 Nvme0n1 : 2.00 6524.00 25.48 0.00 0.00 0.00 0.00 0.00 00:08:11.120 [2024-12-11T13:49:04.167Z] =================================================================================================================== 00:08:11.120 [2024-12-11T13:49:04.167Z] Total : 6524.00 25.48 0.00 0.00 0.00 0.00 0.00 00:08:11.120 00:08:11.379 true 00:08:11.379 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:11.379 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:11.945 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:11.945 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:11.945 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64589 00:08:12.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.203 Nvme0n1 : 3.00 6593.00 25.75 0.00 0.00 0.00 0.00 0.00 00:08:12.203 [2024-12-11T13:49:05.250Z] =================================================================================================================== 00:08:12.203 [2024-12-11T13:49:05.250Z] Total : 6593.00 25.75 0.00 0.00 0.00 0.00 0.00 00:08:12.203 00:08:13.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.138 Nvme0n1 : 4.00 6595.75 25.76 0.00 0.00 0.00 0.00 0.00 00:08:13.138 [2024-12-11T13:49:06.185Z] 
=================================================================================================================== 00:08:13.138 [2024-12-11T13:49:06.185Z] Total : 6595.75 25.76 0.00 0.00 0.00 0.00 0.00 00:08:13.138 00:08:14.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.102 Nvme0n1 : 5.00 6597.40 25.77 0.00 0.00 0.00 0.00 0.00 00:08:14.102 [2024-12-11T13:49:07.149Z] =================================================================================================================== 00:08:14.102 [2024-12-11T13:49:07.149Z] Total : 6597.40 25.77 0.00 0.00 0.00 0.00 0.00 00:08:14.102 00:08:15.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.478 Nvme0n1 : 6.00 6533.83 25.52 0.00 0.00 0.00 0.00 0.00 00:08:15.478 [2024-12-11T13:49:08.525Z] =================================================================================================================== 00:08:15.478 [2024-12-11T13:49:08.525Z] Total : 6533.83 25.52 0.00 0.00 0.00 0.00 0.00 00:08:15.478 00:08:16.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.413 Nvme0n1 : 7.00 6543.86 25.56 0.00 0.00 0.00 0.00 0.00 00:08:16.413 [2024-12-11T13:49:09.460Z] =================================================================================================================== 00:08:16.413 [2024-12-11T13:49:09.460Z] Total : 6543.86 25.56 0.00 0.00 0.00 0.00 0.00 00:08:16.413 00:08:17.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.349 Nvme0n1 : 8.00 6551.38 25.59 0.00 0.00 0.00 0.00 0.00 00:08:17.349 [2024-12-11T13:49:10.396Z] =================================================================================================================== 00:08:17.349 [2024-12-11T13:49:10.396Z] Total : 6551.38 25.59 0.00 0.00 0.00 0.00 0.00 00:08:17.349 00:08:18.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.285 Nvme0n1 : 9.00 6585.44 25.72 0.00 0.00 0.00 0.00 0.00 00:08:18.285 [2024-12-11T13:49:11.332Z] =================================================================================================================== 00:08:18.285 [2024-12-11T13:49:11.332Z] Total : 6585.44 25.72 0.00 0.00 0.00 0.00 0.00 00:08:18.285 00:08:19.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.244 Nvme0n1 : 10.00 6600.00 25.78 0.00 0.00 0.00 0.00 0.00 00:08:19.244 [2024-12-11T13:49:12.291Z] =================================================================================================================== 00:08:19.244 [2024-12-11T13:49:12.292Z] Total : 6600.00 25.78 0.00 0.00 0.00 0.00 0.00 00:08:19.245 00:08:19.245 00:08:19.245 Latency(us) 00:08:19.245 [2024-12-11T13:49:12.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.245 Nvme0n1 : 10.00 6609.80 25.82 0.00 0.00 19358.83 11260.28 121062.87 00:08:19.245 [2024-12-11T13:49:12.292Z] =================================================================================================================== 00:08:19.245 [2024-12-11T13:49:12.292Z] Total : 6609.80 25.82 0.00 0.00 19358.83 11260.28 121062.87 00:08:19.245 { 00:08:19.245 "results": [ 00:08:19.245 { 00:08:19.245 "job": "Nvme0n1", 00:08:19.245 "core_mask": "0x2", 00:08:19.245 "workload": "randwrite", 00:08:19.245 "status": "finished", 00:08:19.245 "queue_depth": 128, 00:08:19.245 "io_size": 4096, 00:08:19.245 "runtime": 
10.004542, 00:08:19.245 "iops": 6609.797829825693, 00:08:19.245 "mibps": 25.819522772756613, 00:08:19.245 "io_failed": 0, 00:08:19.245 "io_timeout": 0, 00:08:19.245 "avg_latency_us": 19358.831380243275, 00:08:19.245 "min_latency_us": 11260.276363636363, 00:08:19.245 "max_latency_us": 121062.86545454545 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "core_count": 1 00:08:19.245 } 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64561 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 64561 ']' 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 64561 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64561 00:08:19.245 killing process with pid 64561 00:08:19.245 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.245 00:08:19.245 Latency(us) 00:08:19.245 [2024-12-11T13:49:12.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.245 [2024-12-11T13:49:12.292Z] =================================================================================================================== 00:08:19.245 [2024-12-11T13:49:12.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64561' 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 64561 00:08:19.245 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 64561 00:08:19.521 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:19.780 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.040 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:20.040 13:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:20.299 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:20.299 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:20.299 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.557 [2024-12-11 13:49:13.503338] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.557 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:20.558 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:20.816 request: 00:08:20.816 { 00:08:20.816 "uuid": "a1519669-80a4-4856-9727-9c93c127ccf2", 00:08:20.816 "method": "bdev_lvol_get_lvstores", 00:08:20.816 "req_id": 1 00:08:20.816 } 00:08:20.816 Got JSON-RPC error response 00:08:20.816 response: 00:08:20.816 { 00:08:20.816 "code": -19, 00:08:20.816 "message": "No such device" 00:08:20.816 } 00:08:20.816 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:20.816 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.816 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.816 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.816 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.075 aio_bdev 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7832d953-9469-43c6-8d1c-9957af9ee7e3 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7832d953-9469-43c6-8d1c-9957af9ee7e3 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.075 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.642 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7832d953-9469-43c6-8d1c-9957af9ee7e3 -t 2000 00:08:21.642 [ 00:08:21.642 { 00:08:21.642 "name": "7832d953-9469-43c6-8d1c-9957af9ee7e3", 00:08:21.642 "aliases": [ 00:08:21.642 "lvs/lvol" 00:08:21.642 ], 00:08:21.642 "product_name": "Logical Volume", 00:08:21.642 "block_size": 4096, 00:08:21.642 "num_blocks": 38912, 00:08:21.642 "uuid": "7832d953-9469-43c6-8d1c-9957af9ee7e3", 00:08:21.642 "assigned_rate_limits": { 00:08:21.642 "rw_ios_per_sec": 0, 00:08:21.642 "rw_mbytes_per_sec": 0, 00:08:21.642 "r_mbytes_per_sec": 0, 00:08:21.642 "w_mbytes_per_sec": 0 00:08:21.642 }, 00:08:21.642 "claimed": false, 00:08:21.642 "zoned": false, 00:08:21.642 "supported_io_types": { 00:08:21.642 "read": true, 00:08:21.642 "write": true, 00:08:21.642 "unmap": true, 00:08:21.642 "flush": false, 00:08:21.642 "reset": true, 00:08:21.642 "nvme_admin": false, 00:08:21.642 "nvme_io": false, 00:08:21.642 "nvme_io_md": false, 00:08:21.642 "write_zeroes": true, 00:08:21.642 "zcopy": false, 00:08:21.642 "get_zone_info": false, 00:08:21.642 "zone_management": false, 00:08:21.642 "zone_append": false, 00:08:21.642 "compare": false, 00:08:21.642 "compare_and_write": false, 00:08:21.642 "abort": false, 00:08:21.642 "seek_hole": true, 00:08:21.642 "seek_data": true, 00:08:21.642 "copy": false, 00:08:21.642 "nvme_iov_md": false 00:08:21.642 }, 00:08:21.643 "driver_specific": { 00:08:21.643 "lvol": { 00:08:21.643 "lvol_store_uuid": "a1519669-80a4-4856-9727-9c93c127ccf2", 00:08:21.643 "base_bdev": "aio_bdev", 00:08:21.643 "thin_provision": false, 00:08:21.643 "num_allocated_clusters": 38, 00:08:21.643 "snapshot": false, 00:08:21.643 "clone": false, 00:08:21.643 "esnap_clone": false 00:08:21.643 } 00:08:21.643 } 00:08:21.643 } 00:08:21.643 ] 00:08:21.643 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:21.643 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:21.643 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:21.901 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:21.901 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:21.901 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:22.160 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:22.160 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7832d953-9469-43c6-8d1c-9957af9ee7e3 00:08:22.726 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1519669-80a4-4856-9727-9c93c127ccf2 00:08:22.726 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.985 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.552 ************************************ 00:08:23.552 END TEST lvs_grow_clean 00:08:23.552 ************************************ 00:08:23.552 00:08:23.552 real 0m19.025s 00:08:23.552 user 0m17.869s 00:08:23.552 sys 0m2.744s 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.552 ************************************ 00:08:23.552 START TEST lvs_grow_dirty 00:08:23.552 ************************************ 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.552 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.811 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:23.811 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.070 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:24.070 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:24.070 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.328 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.328 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.328 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 lvol 150 00:08:24.896 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:24.896 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.896 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:24.896 [2024-12-11 13:49:17.921641] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:24.896 [2024-12-11 13:49:17.921734] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:24.896 true 00:08:25.155 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:25.155 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:25.413 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.413 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.672 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:25.989 13:49:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:25.989 [2024-12-11 13:49:19.022325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:26.260 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:26.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64842 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64842 /var/tmp/bdevperf.sock 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64842 ']' 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.519 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.519 [2024-12-11 13:49:19.393430] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
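As in the clean run earlier in the log, the backing file was already resized from 200 MiB to 400 MiB during setup (the bdev_aio_rescan notice above reports 51200 -> 102400 blocks of 4 KiB); while bdevperf is writing, the lvstore itself is grown and the cluster accounting is checked. Condensed from the rpc.py calls and jq filters used around this point (same illustrative variables as in the sketch above):

  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 = 99 total minus 38 allocated by the 150 MiB lvol

150 MiB rounded up to 4 MiB clusters is 38, matching the num_allocated_clusters field reported for the lvol bdev in this log.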
00:08:26.519 [2024-12-11 13:49:19.393556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64842 ] 00:08:26.519 [2024-12-11 13:49:19.542384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.778 [2024-12-11 13:49:19.602422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.778 [2024-12-11 13:49:19.660859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.778 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.778 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:26.778 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.037 Nvme0n1 00:08:27.037 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.296 [ 00:08:27.296 { 00:08:27.296 "name": "Nvme0n1", 00:08:27.296 "aliases": [ 00:08:27.296 "c8f593e7-f028-42d2-b0cb-0403713ad737" 00:08:27.296 ], 00:08:27.296 "product_name": "NVMe disk", 00:08:27.296 "block_size": 4096, 00:08:27.296 "num_blocks": 38912, 00:08:27.296 "uuid": "c8f593e7-f028-42d2-b0cb-0403713ad737", 00:08:27.296 "numa_id": -1, 00:08:27.296 "assigned_rate_limits": { 00:08:27.296 "rw_ios_per_sec": 0, 00:08:27.296 "rw_mbytes_per_sec": 0, 00:08:27.296 "r_mbytes_per_sec": 0, 00:08:27.296 "w_mbytes_per_sec": 0 00:08:27.296 }, 00:08:27.296 "claimed": false, 00:08:27.296 "zoned": false, 00:08:27.296 "supported_io_types": { 00:08:27.296 "read": true, 00:08:27.296 "write": true, 00:08:27.296 "unmap": true, 00:08:27.296 "flush": true, 00:08:27.296 "reset": true, 00:08:27.296 "nvme_admin": true, 00:08:27.296 "nvme_io": true, 00:08:27.296 "nvme_io_md": false, 00:08:27.296 "write_zeroes": true, 00:08:27.296 "zcopy": false, 00:08:27.296 "get_zone_info": false, 00:08:27.296 "zone_management": false, 00:08:27.296 "zone_append": false, 00:08:27.296 "compare": true, 00:08:27.296 "compare_and_write": true, 00:08:27.296 "abort": true, 00:08:27.296 "seek_hole": false, 00:08:27.296 "seek_data": false, 00:08:27.296 "copy": true, 00:08:27.296 "nvme_iov_md": false 00:08:27.296 }, 00:08:27.296 "memory_domains": [ 00:08:27.296 { 00:08:27.296 "dma_device_id": "system", 00:08:27.296 "dma_device_type": 1 00:08:27.296 } 00:08:27.296 ], 00:08:27.296 "driver_specific": { 00:08:27.296 "nvme": [ 00:08:27.297 { 00:08:27.297 "trid": { 00:08:27.297 "trtype": "TCP", 00:08:27.297 "adrfam": "IPv4", 00:08:27.297 "traddr": "10.0.0.3", 00:08:27.297 "trsvcid": "4420", 00:08:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.297 }, 00:08:27.297 "ctrlr_data": { 00:08:27.297 "cntlid": 1, 00:08:27.297 "vendor_id": "0x8086", 00:08:27.297 "model_number": "SPDK bdev Controller", 00:08:27.297 "serial_number": "SPDK0", 00:08:27.297 "firmware_revision": "25.01", 00:08:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.297 "oacs": { 00:08:27.297 "security": 0, 00:08:27.297 "format": 0, 00:08:27.297 "firmware": 0, 
00:08:27.297 "ns_manage": 0 00:08:27.297 }, 00:08:27.297 "multi_ctrlr": true, 00:08:27.297 "ana_reporting": false 00:08:27.297 }, 00:08:27.297 "vs": { 00:08:27.297 "nvme_version": "1.3" 00:08:27.297 }, 00:08:27.297 "ns_data": { 00:08:27.297 "id": 1, 00:08:27.297 "can_share": true 00:08:27.297 } 00:08:27.297 } 00:08:27.297 ], 00:08:27.297 "mp_policy": "active_passive" 00:08:27.297 } 00:08:27.297 } 00:08:27.297 ] 00:08:27.297 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64858 00:08:27.297 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.297 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.555 Running I/O for 10 seconds... 00:08:28.491 Latency(us) 00:08:28.491 [2024-12-11T13:49:21.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.491 Nvme0n1 : 1.00 6795.00 26.54 0.00 0.00 0.00 0.00 0.00 00:08:28.491 [2024-12-11T13:49:21.538Z] =================================================================================================================== 00:08:28.491 [2024-12-11T13:49:21.538Z] Total : 6795.00 26.54 0.00 0.00 0.00 0.00 0.00 00:08:28.491 00:08:29.426 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:29.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.426 Nvme0n1 : 2.00 6826.50 26.67 0.00 0.00 0.00 0.00 0.00 00:08:29.426 [2024-12-11T13:49:22.473Z] =================================================================================================================== 00:08:29.426 [2024-12-11T13:49:22.473Z] Total : 6826.50 26.67 0.00 0.00 0.00 0.00 0.00 00:08:29.426 00:08:29.685 true 00:08:29.685 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:29.685 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:29.943 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.943 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.943 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 64858 00:08:30.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.510 Nvme0n1 : 3.00 6879.33 26.87 0.00 0.00 0.00 0.00 0.00 00:08:30.510 [2024-12-11T13:49:23.557Z] =================================================================================================================== 00:08:30.510 [2024-12-11T13:49:23.557Z] Total : 6879.33 26.87 0.00 0.00 0.00 0.00 0.00 00:08:30.510 00:08:31.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.496 Nvme0n1 : 4.00 6874.00 26.85 0.00 0.00 0.00 0.00 0.00 00:08:31.496 [2024-12-11T13:49:24.543Z] 
=================================================================================================================== 00:08:31.496 [2024-12-11T13:49:24.543Z] Total : 6874.00 26.85 0.00 0.00 0.00 0.00 0.00 00:08:31.496 00:08:32.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.431 Nvme0n1 : 5.00 6847.60 26.75 0.00 0.00 0.00 0.00 0.00 00:08:32.431 [2024-12-11T13:49:25.478Z] =================================================================================================================== 00:08:32.431 [2024-12-11T13:49:25.478Z] Total : 6847.60 26.75 0.00 0.00 0.00 0.00 0.00 00:08:32.431 00:08:33.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.366 Nvme0n1 : 6.00 6574.17 25.68 0.00 0.00 0.00 0.00 0.00 00:08:33.366 [2024-12-11T13:49:26.413Z] =================================================================================================================== 00:08:33.366 [2024-12-11T13:49:26.413Z] Total : 6574.17 25.68 0.00 0.00 0.00 0.00 0.00 00:08:33.366 00:08:34.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.741 Nvme0n1 : 7.00 6542.14 25.56 0.00 0.00 0.00 0.00 0.00 00:08:34.741 [2024-12-11T13:49:27.788Z] =================================================================================================================== 00:08:34.741 [2024-12-11T13:49:27.788Z] Total : 6542.14 25.56 0.00 0.00 0.00 0.00 0.00 00:08:34.741 00:08:35.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.677 Nvme0n1 : 8.00 6549.88 25.59 0.00 0.00 0.00 0.00 0.00 00:08:35.677 [2024-12-11T13:49:28.724Z] =================================================================================================================== 00:08:35.677 [2024-12-11T13:49:28.724Z] Total : 6549.88 25.59 0.00 0.00 0.00 0.00 0.00 00:08:35.677 00:08:36.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.616 Nvme0n1 : 9.00 6555.89 25.61 0.00 0.00 0.00 0.00 0.00 00:08:36.616 [2024-12-11T13:49:29.663Z] =================================================================================================================== 00:08:36.616 [2024-12-11T13:49:29.663Z] Total : 6555.89 25.61 0.00 0.00 0.00 0.00 0.00 00:08:36.616 00:08:37.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.589 Nvme0n1 : 10.00 6560.70 25.63 0.00 0.00 0.00 0.00 0.00 00:08:37.589 [2024-12-11T13:49:30.636Z] =================================================================================================================== 00:08:37.589 [2024-12-11T13:49:30.636Z] Total : 6560.70 25.63 0.00 0.00 0.00 0.00 0.00 00:08:37.589 00:08:37.589 00:08:37.589 Latency(us) 00:08:37.589 [2024-12-11T13:49:30.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.589 Nvme0n1 : 10.02 6559.42 25.62 0.00 0.00 19508.19 5719.51 223060.71 00:08:37.589 [2024-12-11T13:49:30.636Z] =================================================================================================================== 00:08:37.589 [2024-12-11T13:49:30.636Z] Total : 6559.42 25.62 0.00 0.00 19508.19 5719.51 223060.71 00:08:37.589 { 00:08:37.589 "results": [ 00:08:37.589 { 00:08:37.589 "job": "Nvme0n1", 00:08:37.589 "core_mask": "0x2", 00:08:37.589 "workload": "randwrite", 00:08:37.589 "status": "finished", 00:08:37.589 "queue_depth": 128, 00:08:37.589 "io_size": 4096, 00:08:37.589 "runtime": 
10.021469, 00:08:37.589 "iops": 6559.417586383793, 00:08:37.589 "mibps": 25.62272494681169, 00:08:37.589 "io_failed": 0, 00:08:37.589 "io_timeout": 0, 00:08:37.589 "avg_latency_us": 19508.190670806336, 00:08:37.589 "min_latency_us": 5719.505454545455, 00:08:37.589 "max_latency_us": 223060.71272727274 00:08:37.589 } 00:08:37.589 ], 00:08:37.589 "core_count": 1 00:08:37.589 } 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64842 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 64842 ']' 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 64842 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64842 00:08:37.589 killing process with pid 64842 00:08:37.589 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.589 00:08:37.589 Latency(us) 00:08:37.589 [2024-12-11T13:49:30.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.589 [2024-12-11T13:49:30.636Z] =================================================================================================================== 00:08:37.589 [2024-12-11T13:49:30.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64842' 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 64842 00:08:37.589 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 64842 00:08:37.847 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:38.105 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.364 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:38.364 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.622 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.622 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.622 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64473 
00:08:38.623 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64473 00:08:38.881 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64473 Killed "${NVMF_APP[@]}" "$@" 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64991 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64991 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64991 ']' 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.881 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.881 [2024-12-11 13:49:31.764768] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:38.881 [2024-12-11 13:49:31.765712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.881 [2024-12-11 13:49:31.924730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.140 [2024-12-11 13:49:31.981113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.140 [2024-12-11 13:49:31.981212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.140 [2024-12-11 13:49:31.981224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.140 [2024-12-11 13:49:31.981232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.140 [2024-12-11 13:49:31.981239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
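Because the first target was killed with SIGKILL, the lvstore on aio_bdev was never cleanly closed; the test restarts the target (pid 64991 here) and re-creates the AIO bdev, which triggers the blobstore recovery pass logged just below (the bs_recover / "Recover: blob" notices), and then re-checks that nothing was lost. Roughly, as a sketch (the recovery itself runs automatically when the lvstore is examined):

  kill -9 "$nvmfpid"                                # leave the lvstore dirty on purpose
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                  # fresh target; run inside the nvmf_tgt_ns_spdk netns in this log
  $rpc bdev_aio_create "$aio" aio_bdev 4096         # lvstore load -> blobstore recovery
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # still 61
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99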
00:08:39.140 [2024-12-11 13:49:31.981688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.140 [2024-12-11 13:49:32.047649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.140 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.398 [2024-12-11 13:49:32.414977] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:39.398 [2024-12-11 13:49:32.415322] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:39.398 [2024-12-11 13:49:32.415507] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.657 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.915 13:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8f593e7-f028-42d2-b0cb-0403713ad737 -t 2000 00:08:40.173 [ 00:08:40.173 { 00:08:40.173 "name": "c8f593e7-f028-42d2-b0cb-0403713ad737", 00:08:40.173 "aliases": [ 00:08:40.173 "lvs/lvol" 00:08:40.173 ], 00:08:40.173 "product_name": "Logical Volume", 00:08:40.173 "block_size": 4096, 00:08:40.173 "num_blocks": 38912, 00:08:40.173 "uuid": "c8f593e7-f028-42d2-b0cb-0403713ad737", 00:08:40.173 "assigned_rate_limits": { 00:08:40.173 "rw_ios_per_sec": 0, 00:08:40.173 "rw_mbytes_per_sec": 0, 00:08:40.173 "r_mbytes_per_sec": 0, 00:08:40.173 "w_mbytes_per_sec": 0 00:08:40.173 }, 00:08:40.173 
"claimed": false, 00:08:40.173 "zoned": false, 00:08:40.173 "supported_io_types": { 00:08:40.173 "read": true, 00:08:40.173 "write": true, 00:08:40.173 "unmap": true, 00:08:40.173 "flush": false, 00:08:40.173 "reset": true, 00:08:40.173 "nvme_admin": false, 00:08:40.173 "nvme_io": false, 00:08:40.173 "nvme_io_md": false, 00:08:40.173 "write_zeroes": true, 00:08:40.173 "zcopy": false, 00:08:40.173 "get_zone_info": false, 00:08:40.173 "zone_management": false, 00:08:40.173 "zone_append": false, 00:08:40.173 "compare": false, 00:08:40.173 "compare_and_write": false, 00:08:40.173 "abort": false, 00:08:40.173 "seek_hole": true, 00:08:40.173 "seek_data": true, 00:08:40.173 "copy": false, 00:08:40.173 "nvme_iov_md": false 00:08:40.173 }, 00:08:40.173 "driver_specific": { 00:08:40.173 "lvol": { 00:08:40.173 "lvol_store_uuid": "c1167af2-5ea0-45e3-82c6-a9dae04ed2e2", 00:08:40.173 "base_bdev": "aio_bdev", 00:08:40.173 "thin_provision": false, 00:08:40.173 "num_allocated_clusters": 38, 00:08:40.173 "snapshot": false, 00:08:40.173 "clone": false, 00:08:40.173 "esnap_clone": false 00:08:40.173 } 00:08:40.173 } 00:08:40.173 } 00:08:40.173 ] 00:08:40.173 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:40.173 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:40.173 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:40.432 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:40.432 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:40.432 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:40.690 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:40.690 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.948 [2024-12-11 13:49:33.956766] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.949 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:40.949 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:40.949 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:40.949 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.207 13:49:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:41.207 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:41.465 request: 00:08:41.465 { 00:08:41.465 "uuid": "c1167af2-5ea0-45e3-82c6-a9dae04ed2e2", 00:08:41.465 "method": "bdev_lvol_get_lvstores", 00:08:41.465 "req_id": 1 00:08:41.465 } 00:08:41.465 Got JSON-RPC error response 00:08:41.465 response: 00:08:41.465 { 00:08:41.465 "code": -19, 00:08:41.465 "message": "No such device" 00:08:41.465 } 00:08:41.465 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:41.465 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.465 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.465 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.465 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.723 aio_bdev 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.723 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.981 13:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8f593e7-f028-42d2-b0cb-0403713ad737 -t 2000 00:08:42.238 [ 00:08:42.238 { 
00:08:42.238 "name": "c8f593e7-f028-42d2-b0cb-0403713ad737", 00:08:42.238 "aliases": [ 00:08:42.238 "lvs/lvol" 00:08:42.238 ], 00:08:42.238 "product_name": "Logical Volume", 00:08:42.238 "block_size": 4096, 00:08:42.238 "num_blocks": 38912, 00:08:42.238 "uuid": "c8f593e7-f028-42d2-b0cb-0403713ad737", 00:08:42.238 "assigned_rate_limits": { 00:08:42.238 "rw_ios_per_sec": 0, 00:08:42.238 "rw_mbytes_per_sec": 0, 00:08:42.238 "r_mbytes_per_sec": 0, 00:08:42.238 "w_mbytes_per_sec": 0 00:08:42.238 }, 00:08:42.238 "claimed": false, 00:08:42.238 "zoned": false, 00:08:42.238 "supported_io_types": { 00:08:42.238 "read": true, 00:08:42.238 "write": true, 00:08:42.238 "unmap": true, 00:08:42.238 "flush": false, 00:08:42.238 "reset": true, 00:08:42.238 "nvme_admin": false, 00:08:42.238 "nvme_io": false, 00:08:42.238 "nvme_io_md": false, 00:08:42.238 "write_zeroes": true, 00:08:42.238 "zcopy": false, 00:08:42.238 "get_zone_info": false, 00:08:42.238 "zone_management": false, 00:08:42.238 "zone_append": false, 00:08:42.238 "compare": false, 00:08:42.238 "compare_and_write": false, 00:08:42.238 "abort": false, 00:08:42.238 "seek_hole": true, 00:08:42.238 "seek_data": true, 00:08:42.238 "copy": false, 00:08:42.238 "nvme_iov_md": false 00:08:42.238 }, 00:08:42.238 "driver_specific": { 00:08:42.238 "lvol": { 00:08:42.238 "lvol_store_uuid": "c1167af2-5ea0-45e3-82c6-a9dae04ed2e2", 00:08:42.238 "base_bdev": "aio_bdev", 00:08:42.238 "thin_provision": false, 00:08:42.238 "num_allocated_clusters": 38, 00:08:42.238 "snapshot": false, 00:08:42.238 "clone": false, 00:08:42.238 "esnap_clone": false 00:08:42.238 } 00:08:42.238 } 00:08:42.238 } 00:08:42.238 ] 00:08:42.238 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.238 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:42.238 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.496 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.496 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:42.496 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.757 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.757 13:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c8f593e7-f028-42d2-b0cb-0403713ad737 00:08:43.014 13:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1167af2-5ea0-45e3-82c6-a9dae04ed2e2 00:08:43.272 13:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.839 13:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:44.097 ************************************ 00:08:44.097 END TEST lvs_grow_dirty 00:08:44.097 ************************************ 00:08:44.097 00:08:44.097 real 0m20.572s 00:08:44.097 user 0m43.490s 00:08:44.097 sys 0m8.370s 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:44.097 nvmf_trace.0 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.097 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.355 rmmod nvme_tcp 00:08:44.355 rmmod nvme_fabrics 00:08:44.355 rmmod nvme_keyring 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64991 ']' 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64991 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64991 ']' 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64991 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:44.355 13:49:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.355 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64991 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.613 killing process with pid 64991 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64991' 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64991 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64991 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:44.613 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:44.871 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:44.872 00:08:44.872 real 0m42.507s 00:08:44.872 user 1m8.021s 00:08:44.872 sys 0m12.065s 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.872 ************************************ 00:08:44.872 END TEST nvmf_lvs_grow 00:08:44.872 ************************************ 00:08:44.872 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.130 13:49:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.130 13:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.130 13:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.130 13:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.130 ************************************ 00:08:45.130 START TEST nvmf_bdev_io_wait 00:08:45.130 ************************************ 00:08:45.130 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.130 * Looking for test storage... 
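Before the bdev_io_wait run proceeds, the lvs_grow_dirty case that just finished is worth restating: it deleted the backing AIO bdev underneath a live lvstore, confirmed that bdev_lvol_get_lvstores then fails with "No such device" (-19), re-created the AIO bdev so the blobstore replays its dirty metadata, and re-checked the free/total cluster counts before tearing everything down. A rough manual replay of that sequence is sketched below; the rpc.py subcommands are the ones shown in the trace, while the $lvs_uuid variable and fixed paths are illustrative stand-ins for values the test derives at runtime.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs_uuid=c1167af2-5ea0-45e3-82c6-a9dae04ed2e2   # illustrative; the test reads it from bdev_lvol_get_lvstores

    $rpc bdev_aio_delete aio_bdev                        # drop the backing device; the lvstore closes dirty
    $rpc bdev_lvol_get_lvstores -u "$lvs_uuid" || true   # expected to fail: "No such device" (-19)
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096       # re-attach; blobstore recovery replays the metadata
    $rpc bdev_wait_for_examine                           # let the lvol bdev re-register
    $rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'         # 61 in this run
    $rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 99 in this run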
00:08:45.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.130 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.131 --rc genhtml_branch_coverage=1 00:08:45.131 --rc genhtml_function_coverage=1 00:08:45.131 --rc genhtml_legend=1 00:08:45.131 --rc geninfo_all_blocks=1 00:08:45.131 --rc geninfo_unexecuted_blocks=1 00:08:45.131 00:08:45.131 ' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.131 --rc genhtml_branch_coverage=1 00:08:45.131 --rc genhtml_function_coverage=1 00:08:45.131 --rc genhtml_legend=1 00:08:45.131 --rc geninfo_all_blocks=1 00:08:45.131 --rc geninfo_unexecuted_blocks=1 00:08:45.131 00:08:45.131 ' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.131 --rc genhtml_branch_coverage=1 00:08:45.131 --rc genhtml_function_coverage=1 00:08:45.131 --rc genhtml_legend=1 00:08:45.131 --rc geninfo_all_blocks=1 00:08:45.131 --rc geninfo_unexecuted_blocks=1 00:08:45.131 00:08:45.131 ' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.131 --rc genhtml_branch_coverage=1 00:08:45.131 --rc genhtml_function_coverage=1 00:08:45.131 --rc genhtml_legend=1 00:08:45.131 --rc geninfo_all_blocks=1 00:08:45.131 --rc geninfo_unexecuted_blocks=1 00:08:45.131 00:08:45.131 ' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
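The nvmftestinit call that follows is where NET_TYPE=virt matters: nvmf_veth_init first tries to delete any leftover interfaces (hence the harmless "Cannot find device" lines below), then builds a fresh topology of veth pairs, moves the target-side ends into the nvmf_tgt_ns_spdk namespace, joins the root-namespace ends with a bridge, adds iptables ACCEPT rules for port 4420, and ping-checks all four addresses. A trimmed sketch of that topology, using the same ip/iptables commands the trace shows and omitting the second if2/br2 pair, which follows the same pattern:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root netns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br    # bridge joins the root-netns halves of both pairs
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                         # root netns -> target address reachability check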
00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:45.131 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.132 
13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.132 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:45.390 Cannot find device "nvmf_init_br" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:45.390 Cannot find device "nvmf_init_br2" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:45.390 Cannot find device "nvmf_tgt_br" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.390 Cannot find device "nvmf_tgt_br2" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:45.390 Cannot find device "nvmf_init_br" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:45.390 Cannot find device "nvmf_init_br2" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:45.390 Cannot find device "nvmf_tgt_br" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:45.390 Cannot find device "nvmf_tgt_br2" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:45.390 Cannot find device "nvmf_br" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:45.390 Cannot find device "nvmf_init_if" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:45.390 Cannot find device "nvmf_init_if2" 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:45.390 
13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:45.390 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:45.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:45.649 00:08:45.649 --- 10.0.0.3 ping statistics --- 00:08:45.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.649 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:45.649 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:45.649 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:45.649 00:08:45.649 --- 10.0.0.4 ping statistics --- 00:08:45.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.649 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:08:45.649 00:08:45.649 --- 10.0.0.1 ping statistics --- 00:08:45.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.649 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:45.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:45.649 00:08:45.649 --- 10.0.0.2 ping statistics --- 00:08:45.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.649 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=65359 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 65359 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 65359 ']' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.649 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.649 [2024-12-11 13:49:38.646455] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:08:45.649 [2024-12-11 13:49:38.646582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.908 [2024-12-11 13:49:38.796221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.908 [2024-12-11 13:49:38.860319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.908 [2024-12-11 13:49:38.860399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.908 [2024-12-11 13:49:38.860427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.908 [2024-12-11 13:49:38.860435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.908 [2024-12-11 13:49:38.860441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.908 [2024-12-11 13:49:38.861718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.908 [2024-12-11 13:49:38.861813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.908 [2024-12-11 13:49:38.861992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.908 [2024-12-11 13:49:38.861996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.908 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:46.166 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 [2024-12-11 13:49:39.017295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 [2024-12-11 13:49:39.033723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 Malloc0 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 [2024-12-11 13:49:39.090814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65381 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65383 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.166 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.166 13:49:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.166 { 00:08:46.166 "params": { 00:08:46.166 "name": "Nvme$subsystem", 00:08:46.166 "trtype": "$TEST_TRANSPORT", 00:08:46.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.166 "adrfam": "ipv4", 00:08:46.166 "trsvcid": "$NVMF_PORT", 00:08:46.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.166 "hdgst": ${hdgst:-false}, 00:08:46.166 "ddgst": ${ddgst:-false} 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 } 00:08:46.167 EOF 00:08:46.167 )") 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65385 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.167 { 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme$subsystem", 00:08:46.167 "trtype": "$TEST_TRANSPORT", 00:08:46.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "$NVMF_PORT", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.167 "hdgst": ${hdgst:-false}, 00:08:46.167 "ddgst": ${ddgst:-false} 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 } 00:08:46.167 EOF 00:08:46.167 )") 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65388 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:46.167 { 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme$subsystem", 00:08:46.167 "trtype": "$TEST_TRANSPORT", 00:08:46.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "$NVMF_PORT", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.167 "hdgst": ${hdgst:-false}, 00:08:46.167 "ddgst": ${ddgst:-false} 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 } 00:08:46.167 EOF 00:08:46.167 )") 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme1", 00:08:46.167 "trtype": "tcp", 00:08:46.167 "traddr": "10.0.0.3", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "4420", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.167 "hdgst": false, 00:08:46.167 "ddgst": false 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 }' 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme1", 00:08:46.167 "trtype": "tcp", 00:08:46.167 "traddr": "10.0.0.3", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "4420", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.167 "hdgst": false, 00:08:46.167 "ddgst": false 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 }' 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.167 { 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme$subsystem", 00:08:46.167 "trtype": "$TEST_TRANSPORT", 00:08:46.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "$NVMF_PORT", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.167 "hdgst": ${hdgst:-false}, 00:08:46.167 "ddgst": ${ddgst:-false} 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 } 00:08:46.167 EOF 00:08:46.167 )") 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
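Each bdevperf instance above receives its NVMe-oF attach configuration over an anonymous pipe: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for Nvme1 (tcp, 10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1), and the shell hands it to bdevperf as --json /dev/fd/63. Reproducing one workload by hand would look roughly like the sketch below; the <(...) process substitution stands in for the /dev/fd/63 redirection seen in the trace.

    # One of the four workloads (write, queue depth 128, 4 KiB I/O, 1 second), as launched in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256
    # The read, flush and unmap instances differ only in the -m/-i core assignment and the -w argument.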
00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme1", 00:08:46.167 "trtype": "tcp", 00:08:46.167 "traddr": "10.0.0.3", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "4420", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.167 "hdgst": false, 00:08:46.167 "ddgst": false 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 }' 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.167 "params": { 00:08:46.167 "name": "Nvme1", 00:08:46.167 "trtype": "tcp", 00:08:46.167 "traddr": "10.0.0.3", 00:08:46.167 "adrfam": "ipv4", 00:08:46.167 "trsvcid": "4420", 00:08:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.167 "hdgst": false, 00:08:46.167 "ddgst": false 00:08:46.167 }, 00:08:46.167 "method": "bdev_nvme_attach_controller" 00:08:46.167 }' 00:08:46.167 [2024-12-11 13:49:39.158257] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:46.167 [2024-12-11 13:49:39.158357] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:46.167 [2024-12-11 13:49:39.163463] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:46.167 [2024-12-11 13:49:39.163541] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:46.167 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65381 00:08:46.167 [2024-12-11 13:49:39.181448] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:46.167 [2024-12-11 13:49:39.181666] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:46.167 [2024-12-11 13:49:39.188276] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:08:46.167 [2024-12-11 13:49:39.188357] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:46.425 [2024-12-11 13:49:39.383445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.425 [2024-12-11 13:49:39.440615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.425 [2024-12-11 13:49:39.451545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.425 [2024-12-11 13:49:39.454974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.683 [2024-12-11 13:49:39.508053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.683 [2024-12-11 13:49:39.521955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.683 [2024-12-11 13:49:39.538500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.683 [2024-12-11 13:49:39.592335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.683 Running I/O for 1 seconds... 00:08:46.683 [2024-12-11 13:49:39.606171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.683 [2024-12-11 13:49:39.610430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.683 Running I/O for 1 seconds... 00:08:46.683 [2024-12-11 13:49:39.664928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:46.683 [2024-12-11 13:49:39.678718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.941 Running I/O for 1 seconds... 00:08:46.941 Running I/O for 1 seconds... 
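The xtrace above shows nvmf/common.sh building one bdev_nvme_attach_controller entry per subsystem in a here-document and handing the finished config to each bdevperf instance over /dev/fd/63, i.e. through process substitution, with one instance per workload and core mask. A condensed, stand-alone sketch of that pattern follows; the outer "subsystems"/"bdev" wrapper is an assumption about the final document bdevperf consumes, since the trace only prints the inner controller object:

# Minimal reproduction of the gen_nvmf_target_json pattern seen in the trace
# (wrapper structure assumed; values taken from the printed JSON above).
gen_target_json() {
cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# One bdevperf per workload, matching the invocations in the trace
# (write on core 0x10, read on 0x20, flush on 0x40, unmap on 0x80).
# Process substitution is where the /dev/fd/63 in the trace comes from.
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_target_json) -q 128 -o 4096 -w read -t 1 -s 256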
00:08:47.875 6508.00 IOPS, 25.42 MiB/s 00:08:47.875 Latency(us) 00:08:47.875 [2024-12-11T13:49:40.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.875 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:47.875 Nvme1n1 : 1.03 6482.57 25.32 0.00 0.00 19415.73 5749.29 47662.55 00:08:47.875 [2024-12-11T13:49:40.922Z] =================================================================================================================== 00:08:47.875 [2024-12-11T13:49:40.922Z] Total : 6482.57 25.32 0.00 0.00 19415.73 5749.29 47662.55 00:08:47.875 7714.00 IOPS, 30.13 MiB/s 00:08:47.875 Latency(us) 00:08:47.875 [2024-12-11T13:49:40.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.875 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:47.875 Nvme1n1 : 1.01 7753.82 30.29 0.00 0.00 16406.81 10187.87 31933.91 00:08:47.875 [2024-12-11T13:49:40.922Z] =================================================================================================================== 00:08:47.875 [2024-12-11T13:49:40.922Z] Total : 7753.82 30.29 0.00 0.00 16406.81 10187.87 31933.91 00:08:47.875 166424.00 IOPS, 650.09 MiB/s 00:08:47.875 Latency(us) 00:08:47.875 [2024-12-11T13:49:40.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.875 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:47.875 Nvme1n1 : 1.00 166093.34 648.80 0.00 0.00 766.58 366.78 1980.97 00:08:47.875 [2024-12-11T13:49:40.922Z] =================================================================================================================== 00:08:47.875 [2024-12-11T13:49:40.922Z] Total : 166093.34 648.80 0.00 0.00 766.58 366.78 1980.97 00:08:47.875 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65383 00:08:47.875 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65385 00:08:47.875 6288.00 IOPS, 24.56 MiB/s 00:08:47.875 Latency(us) 00:08:47.875 [2024-12-11T13:49:40.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.875 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:47.875 Nvme1n1 : 1.01 6396.11 24.98 0.00 0.00 19929.48 6672.76 47662.55 00:08:47.875 [2024-12-11T13:49:40.922Z] =================================================================================================================== 00:08:47.875 [2024-12-11T13:49:40.922Z] Total : 6396.11 24.98 0.00 0.00 19929.48 6672.76 47662.55 00:08:47.875 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65388 00:08:48.133 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.133 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:48.134 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.134 rmmod nvme_tcp 00:08:48.134 rmmod nvme_fabrics 00:08:48.134 rmmod nvme_keyring 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 65359 ']' 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 65359 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 65359 ']' 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 65359 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65359 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65359' 00:08:48.134 killing process with pid 65359 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 65359 00:08:48.134 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 65359 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.392 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:48.651 ************************************ 00:08:48.651 END TEST nvmf_bdev_io_wait 00:08:48.651 ************************************ 00:08:48.651 00:08:48.651 real 0m3.601s 00:08:48.651 user 0m14.259s 00:08:48.651 sys 0m2.247s 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.651 ************************************ 00:08:48.651 START TEST nvmf_queue_depth 00:08:48.651 ************************************ 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:48.651 * Looking for test storage... 
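The nvmftestfini sequence traced just above (and repeated after every test in this log) unwinds the whole fixture: the host-side NVMe modules, the SPDK-tagged iptables rules, the veth/bridge plumbing, and the target network namespace. A condensed sketch of those steps, using the interface and namespace names from this run; the final netns removal is assumed to be what _remove_spdk_ns does behind its xtrace-disabled call:

# Condensed nvmftestfini, names as used throughout this run.
modprobe -v -r nvme-tcp                                  # nvmfcleanup: unload host modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                          # killprocess (pid 65359 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                         # assumed content of _remove_spdk_ns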
00:08:48.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.651 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.911 --rc genhtml_branch_coverage=1 00:08:48.911 --rc genhtml_function_coverage=1 00:08:48.911 --rc genhtml_legend=1 00:08:48.911 --rc geninfo_all_blocks=1 00:08:48.911 --rc geninfo_unexecuted_blocks=1 00:08:48.911 00:08:48.911 ' 00:08:48.911 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.911 --rc genhtml_branch_coverage=1 00:08:48.911 --rc genhtml_function_coverage=1 00:08:48.911 --rc genhtml_legend=1 00:08:48.912 --rc geninfo_all_blocks=1 00:08:48.912 --rc geninfo_unexecuted_blocks=1 00:08:48.912 00:08:48.912 ' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.912 --rc genhtml_branch_coverage=1 00:08:48.912 --rc genhtml_function_coverage=1 00:08:48.912 --rc genhtml_legend=1 00:08:48.912 --rc geninfo_all_blocks=1 00:08:48.912 --rc geninfo_unexecuted_blocks=1 00:08:48.912 00:08:48.912 ' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.912 --rc genhtml_branch_coverage=1 00:08:48.912 --rc genhtml_function_coverage=1 00:08:48.912 --rc genhtml_legend=1 00:08:48.912 --rc geninfo_all_blocks=1 00:08:48.912 --rc geninfo_unexecuted_blocks=1 00:08:48.912 00:08:48.912 ' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.912 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:48.912 
13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.912 13:49:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:48.912 Cannot find device "nvmf_init_br" 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:48.912 Cannot find device "nvmf_init_br2" 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:48.912 Cannot find device "nvmf_tgt_br" 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.912 Cannot find device "nvmf_tgt_br2" 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:48.912 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.912 Cannot find device "nvmf_init_br" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.913 Cannot find device "nvmf_init_br2" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.913 Cannot find device "nvmf_tgt_br" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.913 Cannot find device "nvmf_tgt_br2" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.913 Cannot find device "nvmf_br" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.913 Cannot find device "nvmf_init_if" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.913 Cannot find device "nvmf_init_if2" 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.913 13:49:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.171 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.171 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.171 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.171 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.171 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.172 
13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:08:49.172 00:08:49.172 --- 10.0.0.3 ping statistics --- 00:08:49.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.172 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.172 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.172 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:49.172 00:08:49.172 --- 10.0.0.4 ping statistics --- 00:08:49.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.172 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:49.172 00:08:49.172 --- 10.0.0.1 ping statistics --- 00:08:49.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.172 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:49.172 00:08:49.172 --- 10.0.0.2 ping statistics --- 00:08:49.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.172 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=65647 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 65647 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65647 ']' 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.431 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.431 [2024-12-11 13:49:42.301336] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
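Before the queue-depth target comes up, nvmf_veth_init (traced above) builds the test topology: two initiator-side veths in the root namespace (10.0.0.1 and 10.0.0.2), two target-side veths inside nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), their peer ends enslaved to the bridge nvmf_br, SPDK-tagged iptables ACCEPT rules for port 4420, and a ping sweep to verify reachability. A condensed sketch of the same setup, showing one initiator/target pair; the second pair is built identically:

# Condensed nvmf_veth_init, names and addresses as in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # root namespace -> target namespace over the bridge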
00:08:49.431 [2024-12-11 13:49:42.301435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.431 [2024-12-11 13:49:42.458962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.689 [2024-12-11 13:49:42.520396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.689 [2024-12-11 13:49:42.520458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.689 [2024-12-11 13:49:42.520473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.689 [2024-12-11 13:49:42.520484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.689 [2024-12-11 13:49:42.520493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.689 [2024-12-11 13:49:42.521001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.689 [2024-12-11 13:49:42.579176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.689 [2024-12-11 13:49:42.696702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.689 Malloc0 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.689 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.948 [2024-12-11 13:49:42.752858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65677 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65677 /var/tmp/bdevperf.sock 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65677 ']' 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.948 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.948 [2024-12-11 13:49:42.814883] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
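The rpc_cmd calls traced above assemble the target that bdevperf will exercise: a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.3:4420. The same construction expressed as direct rpc.py calls, as a sketch only; the test issues them through the rpc_cmd wrapper against the nvmf_tgt started earlier with -m 0x2:

# The target built in the trace above, as direct rpc.py calls from the SPDK repo root
# (sketch; rpc_cmd talks to the default /var/tmp/spdk.sock).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches to that listener from the root namespace with bdev_nvme_attach_controller over /var/tmp/bdevperf.sock, as shown in the trace below.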
00:08:49.948 [2024-12-11 13:49:42.814986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65677 ] 00:08:49.948 [2024-12-11 13:49:42.967645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.207 [2024-12-11 13:49:43.035254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.207 [2024-12-11 13:49:43.093866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:50.207 NVMe0n1 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.207 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:50.466 Running I/O for 10 seconds... 00:08:52.336 7016.00 IOPS, 27.41 MiB/s [2024-12-11T13:49:46.759Z] 7211.50 IOPS, 28.17 MiB/s [2024-12-11T13:49:47.385Z] 7524.00 IOPS, 29.39 MiB/s [2024-12-11T13:49:48.760Z] 7605.75 IOPS, 29.71 MiB/s [2024-12-11T13:49:49.695Z] 7636.60 IOPS, 29.83 MiB/s [2024-12-11T13:49:50.629Z] 7683.33 IOPS, 30.01 MiB/s [2024-12-11T13:49:51.562Z] 7736.86 IOPS, 30.22 MiB/s [2024-12-11T13:49:52.496Z] 7828.75 IOPS, 30.58 MiB/s [2024-12-11T13:49:53.432Z] 7957.00 IOPS, 31.08 MiB/s [2024-12-11T13:49:53.432Z] 8038.40 IOPS, 31.40 MiB/s 00:09:00.385 Latency(us) 00:09:00.385 [2024-12-11T13:49:53.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.385 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:00.385 Verification LBA range: start 0x0 length 0x4000 00:09:00.385 NVMe0n1 : 10.08 8079.51 31.56 0.00 0.00 126100.20 20733.21 98184.84 00:09:00.385 [2024-12-11T13:49:53.432Z] =================================================================================================================== 00:09:00.385 [2024-12-11T13:49:53.432Z] Total : 8079.51 31.56 0.00 0.00 126100.20 20733.21 98184.84 00:09:00.385 { 00:09:00.385 "results": [ 00:09:00.385 { 00:09:00.385 "job": "NVMe0n1", 00:09:00.385 "core_mask": "0x1", 00:09:00.385 "workload": "verify", 00:09:00.385 "status": "finished", 00:09:00.385 "verify_range": { 00:09:00.385 "start": 0, 00:09:00.385 "length": 16384 00:09:00.385 }, 00:09:00.385 "queue_depth": 1024, 00:09:00.385 "io_size": 4096, 00:09:00.385 "runtime": 10.075863, 00:09:00.385 "iops": 8079.5064402920125, 00:09:00.385 "mibps": 31.560572032390674, 00:09:00.385 "io_failed": 0, 00:09:00.385 "io_timeout": 0, 00:09:00.385 "avg_latency_us": 126100.19604345341, 00:09:00.385 "min_latency_us": 20733.20727272727, 00:09:00.385 "max_latency_us": 98184.84363636364 
00:09:00.385 } 00:09:00.385 ], 00:09:00.385 "core_count": 1 00:09:00.385 } 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65677 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65677 ']' 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65677 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65677 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.644 killing process with pid 65677 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65677' 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65677 00:09:00.644 Received shutdown signal, test time was about 10.000000 seconds 00:09:00.644 00:09:00.644 Latency(us) 00:09:00.644 [2024-12-11T13:49:53.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.644 [2024-12-11T13:49:53.691Z] =================================================================================================================== 00:09:00.644 [2024-12-11T13:49:53.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65677 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.644 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.902 rmmod nvme_tcp 00:09:00.902 rmmod nvme_fabrics 00:09:00.902 rmmod nvme_keyring 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 65647 ']' 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 65647 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65647 ']' 
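Two quick consistency checks on the bdevperf summary above: the MiB/s column is just IOPS scaled by the 4 KiB I/O size, and with 1024 outstanding I/Os the average latency follows from Little's law.

# 8079.51 IOPS * 4096 B = ~31.56 MiB/s, matching the reported throughput.
awk 'BEGIN { printf "%.2f MiB/s\n", 8079.51 * 4096 / (1024 * 1024) }'
# Little's law: 1024 outstanding I/Os / 8079.51 IOPS = ~126.7 ms, in line with
# the reported 126.10 ms average latency for the run.
awk 'BEGIN { printf "%.1f ms\n", 1024 * 1000 / 8079.51 }'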
00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65647 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65647 00:09:00.902 killing process with pid 65647 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65647' 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65647 00:09:00.902 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65647 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:01.160 13:49:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.160 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:01.418 00:09:01.418 real 0m12.659s 00:09:01.418 user 0m21.421s 00:09:01.418 sys 0m2.206s 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.418 ************************************ 00:09:01.418 END TEST nvmf_queue_depth 00:09:01.418 ************************************ 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.418 13:49:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.418 ************************************ 00:09:01.418 START TEST nvmf_target_multipath 00:09:01.419 ************************************ 00:09:01.419 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.419 * Looking for test storage... 
00:09:01.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.419 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.419 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.419 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:01.678 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.679 --rc genhtml_branch_coverage=1 00:09:01.679 --rc genhtml_function_coverage=1 00:09:01.679 --rc genhtml_legend=1 00:09:01.679 --rc geninfo_all_blocks=1 00:09:01.679 --rc geninfo_unexecuted_blocks=1 00:09:01.679 00:09:01.679 ' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.679 --rc genhtml_branch_coverage=1 00:09:01.679 --rc genhtml_function_coverage=1 00:09:01.679 --rc genhtml_legend=1 00:09:01.679 --rc geninfo_all_blocks=1 00:09:01.679 --rc geninfo_unexecuted_blocks=1 00:09:01.679 00:09:01.679 ' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.679 --rc genhtml_branch_coverage=1 00:09:01.679 --rc genhtml_function_coverage=1 00:09:01.679 --rc genhtml_legend=1 00:09:01.679 --rc geninfo_all_blocks=1 00:09:01.679 --rc geninfo_unexecuted_blocks=1 00:09:01.679 00:09:01.679 ' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.679 --rc genhtml_branch_coverage=1 00:09:01.679 --rc genhtml_function_coverage=1 00:09:01.679 --rc genhtml_legend=1 00:09:01.679 --rc geninfo_all_blocks=1 00:09:01.679 --rc geninfo_unexecuted_blocks=1 00:09:01.679 00:09:01.679 ' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.679 
13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:01.679 13:49:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.679 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:01.680 Cannot find device "nvmf_init_br" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:01.680 Cannot find device "nvmf_init_br2" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:01.680 Cannot find device "nvmf_tgt_br" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.680 Cannot find device "nvmf_tgt_br2" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:01.680 Cannot find device "nvmf_init_br" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:01.680 Cannot find device "nvmf_init_br2" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:01.680 Cannot find device "nvmf_tgt_br" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:01.680 Cannot find device "nvmf_tgt_br2" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:01.680 Cannot find device "nvmf_br" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:01.680 Cannot find device "nvmf_init_if" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:01.680 Cannot find device "nvmf_init_if2" 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.680 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:01.939 00:09:01.939 --- 10.0.0.3 ping statistics --- 00:09:01.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.939 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:01.939 00:09:01.939 --- 10.0.0.4 ping statistics --- 00:09:01.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.939 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:01.939 00:09:01.939 --- 10.0.0.1 ping statistics --- 00:09:01.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.939 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:09:01.939 00:09:01.939 --- 10.0.0.2 ping statistics --- 00:09:01.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.939 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=66042 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 66042 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 66042 ']' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.939 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.198 [2024-12-11 13:49:55.018260] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:09:02.198 [2024-12-11 13:49:55.018355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.198 [2024-12-11 13:49:55.164197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.198 [2024-12-11 13:49:55.225146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.198 [2024-12-11 13:49:55.225513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.198 [2024-12-11 13:49:55.225689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.198 [2024-12-11 13:49:55.225863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.198 [2024-12-11 13:49:55.225907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.198 [2024-12-11 13:49:55.227334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.198 [2024-12-11 13:49:55.227438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.198 [2024-12-11 13:49:55.230735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.198 [2024-12-11 13:49:55.230781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.456 [2024-12-11 13:49:55.285441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.456 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.714 [2024-12-11 13:49:55.623181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.714 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:02.972 Malloc0 00:09:02.972 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:03.230 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.489 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:03.747 [2024-12-11 13:49:56.697324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.747 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:04.006 [2024-12-11 13:49:56.993652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:04.006 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:04.264 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66124 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:06.799 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:06.799 [global] 00:09:06.799 thread=1 00:09:06.799 invalidate=1 00:09:06.799 rw=randrw 00:09:06.799 time_based=1 00:09:06.799 runtime=6 00:09:06.799 ioengine=libaio 00:09:06.799 direct=1 00:09:06.799 bs=4096 00:09:06.799 iodepth=128 00:09:06.799 norandommap=0 00:09:06.799 numjobs=1 00:09:06.799 00:09:06.799 verify_dump=1 00:09:06.799 verify_backlog=512 00:09:06.799 verify_state_save=0 00:09:06.799 do_verify=1 00:09:06.799 verify=crc32c-intel 00:09:06.799 [job0] 00:09:06.799 filename=/dev/nvme0n1 00:09:06.799 Could not set queue depth (nvme0n1) 00:09:06.799 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:06.799 fio-3.35 00:09:06.799 Starting 1 thread 00:09:07.377 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:07.635 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.894 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:08.152 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:08.719 13:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66124 00:09:12.909 00:09:12.909 job0: (groupid=0, jobs=1): err= 0: pid=66145: Wed Dec 11 13:50:05 2024 00:09:12.909 read: IOPS=10.1k, BW=39.5MiB/s (41.5MB/s)(237MiB/6007msec) 00:09:12.909 slat (usec): min=4, max=7849, avg=57.88, stdev=235.02 00:09:12.909 clat (usec): min=1545, max=19128, avg=8620.02, stdev=1630.96 00:09:12.909 lat (usec): min=1554, max=19138, avg=8677.90, stdev=1636.87 00:09:12.909 clat percentiles (usec): 00:09:12.909 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:12.909 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:09:12.909 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[12256], 00:09:12.909 | 99.00th=[13566], 99.50th=[14746], 99.90th=[16909], 99.95th=[17957], 00:09:12.909 | 99.99th=[18744] 00:09:12.909 bw ( KiB/s): min= 1968, max=26600, per=52.42%, avg=21219.33, stdev=6566.02, samples=12 00:09:12.909 iops : min= 492, max= 6650, avg=5304.83, stdev=1641.50, samples=12 00:09:12.909 write: IOPS=5988, BW=23.4MiB/s (24.5MB/s)(125MiB/5327msec); 0 zone resets 00:09:12.909 slat (usec): min=15, max=2657, avg=67.69, stdev=169.86 00:09:12.909 clat (usec): min=1507, max=18074, avg=7464.29, stdev=1457.82 00:09:12.909 lat (usec): min=1537, max=18104, avg=7531.98, stdev=1464.31 00:09:12.909 clat percentiles (usec): 00:09:12.909 | 1.00th=[ 3425], 5.00th=[ 4490], 10.00th=[ 5669], 20.00th=[ 6849], 00:09:12.909 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:09:12.909 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 9372], 00:09:12.909 | 99.00th=[11863], 99.50th=[12911], 99.90th=[16188], 99.95th=[16909], 00:09:12.909 | 99.99th=[17957] 00:09:12.909 bw ( KiB/s): min= 2240, max=27184, per=88.61%, avg=21225.33, stdev=6497.44, samples=12 00:09:12.909 iops : min= 560, max= 6796, avg=5306.33, stdev=1624.36, samples=12 00:09:12.909 lat (msec) : 2=0.02%, 4=1.30%, 10=89.33%, 20=9.34% 00:09:12.909 cpu : usr=5.73%, sys=20.18%, ctx=5315, majf=0, minf=127 00:09:12.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:12.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.909 issued rwts: total=60791,31901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.909 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.909 00:09:12.909 Run status group 0 (all jobs): 00:09:12.909 READ: bw=39.5MiB/s (41.5MB/s), 39.5MiB/s-39.5MiB/s (41.5MB/s-41.5MB/s), io=237MiB (249MB), run=6007-6007msec 00:09:12.909 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=125MiB (131MB), run=5327-5327msec 00:09:12.909 00:09:12.909 Disk stats (read/write): 00:09:12.909 nvme0n1: ios=60172/31038, merge=0/0, ticks=498327/217659, in_queue=715986, util=98.60% 00:09:12.909 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:12.909 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66232 00:09:13.168 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:13.168 [global] 00:09:13.168 thread=1 00:09:13.168 invalidate=1 00:09:13.168 rw=randrw 00:09:13.168 time_based=1 00:09:13.168 runtime=6 00:09:13.168 ioengine=libaio 00:09:13.168 direct=1 00:09:13.168 bs=4096 00:09:13.168 iodepth=128 00:09:13.168 norandommap=0 00:09:13.168 numjobs=1 00:09:13.168 00:09:13.168 verify_dump=1 00:09:13.168 verify_backlog=512 00:09:13.168 verify_state_save=0 00:09:13.168 do_verify=1 00:09:13.168 verify=crc32c-intel 00:09:13.168 [job0] 00:09:13.168 filename=/dev/nvme0n1 00:09:13.425 Could not set queue depth (nvme0n1) 00:09:13.425 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.425 fio-3.35 00:09:13.425 Starting 1 thread 00:09:14.386 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:14.645 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:14.903 
13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.903 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:15.162 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:15.420 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66232 00:09:19.619 00:09:19.619 job0: (groupid=0, jobs=1): err= 0: pid=66253: Wed Dec 11 13:50:12 2024 00:09:19.619 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(255MiB/6007msec) 00:09:19.619 slat (usec): min=2, max=6333, avg=46.73, stdev=216.53 00:09:19.619 clat (usec): min=310, max=19333, avg=8067.71, stdev=2317.08 00:09:19.619 lat (usec): min=320, max=19509, avg=8114.44, stdev=2334.40 00:09:19.619 clat percentiles (usec): 00:09:19.619 | 1.00th=[ 2180], 5.00th=[ 4178], 10.00th=[ 4948], 20.00th=[ 6194], 00:09:19.619 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:19.619 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[12125], 00:09:19.619 | 99.00th=[14484], 99.50th=[15270], 99.90th=[18220], 99.95th=[18220], 00:09:19.619 | 99.99th=[18744] 00:09:19.619 bw ( KiB/s): min=11248, max=35760, per=52.29%, avg=22705.82, stdev=7603.23, samples=11 00:09:19.619 iops : min= 2812, max= 8940, avg=5676.45, stdev=1900.81, samples=11 00:09:19.619 write: IOPS=6221, BW=24.3MiB/s (25.5MB/s)(134MiB/5496msec); 0 zone resets 00:09:19.619 slat (usec): min=3, max=1946, avg=56.81, stdev=155.01 00:09:19.619 clat (usec): min=751, max=18647, avg=6923.30, stdev=2130.84 00:09:19.619 lat (usec): min=775, max=18676, avg=6980.11, stdev=2147.50 00:09:19.619 clat percentiles (usec): 00:09:19.619 | 1.00th=[ 2442], 5.00th=[ 3326], 10.00th=[ 3884], 20.00th=[ 4621], 00:09:19.619 | 30.00th=[ 5735], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 7767], 00:09:19.619 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[ 9765], 00:09:19.619 | 99.00th=[12256], 99.50th=[13304], 99.90th=[15533], 99.95th=[16057], 00:09:19.619 | 99.99th=[16712] 00:09:19.619 bw ( KiB/s): min=11880, max=34810, per=91.22%, avg=22700.55, stdev=7441.01, samples=11 00:09:19.619 iops : min= 2970, max= 8702, avg=5675.09, stdev=1860.17, samples=11 00:09:19.619 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:19.619 lat (msec) : 2=0.64%, 4=6.15%, 10=83.26%, 20=9.92% 00:09:19.619 cpu : usr=5.64%, sys=19.93%, ctx=5616, majf=0, minf=78 00:09:19.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:19.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.619 issued rwts: total=65205,34192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.619 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:19.619 00:09:19.619 Run status group 0 (all jobs): 00:09:19.619 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=255MiB (267MB), run=6007-6007msec 00:09:19.619 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=134MiB (140MB), run=5496-5496msec 00:09:19.619 00:09:19.619 Disk stats (read/write): 00:09:19.619 nvme0n1: ios=64411/33698, merge=0/0, ticks=496469/219174, in_queue=715643, util=98.68% 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.619 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:19.620 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.620 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:19.620 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.878 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.137 rmmod nvme_tcp 00:09:20.137 rmmod nvme_fabrics 00:09:20.137 rmmod nvme_keyring 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
66042 ']' 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 66042 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 66042 ']' 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 66042 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.137 13:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66042 00:09:20.137 killing process with pid 66042 00:09:20.137 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.137 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.137 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66042' 00:09:20.137 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 66042 00:09:20.137 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 66042 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:20.400 13:50:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:20.400 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:20.690 ************************************ 00:09:20.690 END TEST nvmf_target_multipath 00:09:20.690 ************************************ 00:09:20.690 00:09:20.690 real 0m19.212s 00:09:20.690 user 1m11.033s 00:09:20.690 sys 0m9.237s 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.690 ************************************ 00:09:20.690 START TEST nvmf_zcopy 00:09:20.690 ************************************ 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:20.690 * Looking for test storage... 
00:09:20.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.690 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.949 --rc genhtml_branch_coverage=1 00:09:20.949 --rc genhtml_function_coverage=1 00:09:20.949 --rc genhtml_legend=1 00:09:20.949 --rc geninfo_all_blocks=1 00:09:20.949 --rc geninfo_unexecuted_blocks=1 00:09:20.949 00:09:20.949 ' 00:09:20.949 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.949 --rc genhtml_branch_coverage=1 00:09:20.949 --rc genhtml_function_coverage=1 00:09:20.949 --rc genhtml_legend=1 00:09:20.949 --rc geninfo_all_blocks=1 00:09:20.949 --rc geninfo_unexecuted_blocks=1 00:09:20.949 00:09:20.950 ' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.950 --rc genhtml_branch_coverage=1 00:09:20.950 --rc genhtml_function_coverage=1 00:09:20.950 --rc genhtml_legend=1 00:09:20.950 --rc geninfo_all_blocks=1 00:09:20.950 --rc geninfo_unexecuted_blocks=1 00:09:20.950 00:09:20.950 ' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.950 --rc genhtml_branch_coverage=1 00:09:20.950 --rc genhtml_function_coverage=1 00:09:20.950 --rc genhtml_legend=1 00:09:20.950 --rc geninfo_all_blocks=1 00:09:20.950 --rc geninfo_unexecuted_blocks=1 00:09:20.950 00:09:20.950 ' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
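The lcov gate traced above (scripts/common.sh@333 through @368) is a field-wise version comparison: "lt 1.15 2" splits both strings on '.', '-' and ':' and compares them numerically. A condensed reconstruction from the xtrace; the variable names and the splitting mirror the trace, while collapsing the operator handling into a single pattern match is a simplification of the traced lt/gt/eq bookkeeping.

cmp_versions() {
    local -a ver1 ver2
    local ver1_l ver2_l v op=$2 result="="
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    # Walk the fields numerically; the first differing field decides the result.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            result=">"; break
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            result="<"; break
        fi
    done
    [[ $op == *"$result"* ]]   # "<" satisfies "<" and "<=", "=" satisfies "<=", ">=" and "=="
}

lt() { cmp_versions "$1" "<" "$2"; }   # the trace's "lt 1.15 2"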
00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
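nvmftestinit now reaches nvmf_veth_init (nvmf/common.sh@145 onward, traced below): the initiator interfaces stay in the root namespace, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace, and a Linux bridge joins the two halves so the initiator addresses 10.0.0.1/2 can reach the target addresses 10.0.0.3/4 over TCP port 4420. A condensed sketch of that topology, using only commands, interface names and addresses that appear in the trace that follows:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip link add nvmf_br type bridge                               # ties all four *_br peers together
ip link set nvmf_init_br master nvmf_br                       # repeated for the other three peers
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                            # connectivity checks, shown below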
00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:20.950 Cannot find device "nvmf_init_br" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:20.950 13:50:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:20.950 Cannot find device "nvmf_init_br2" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:20.950 Cannot find device "nvmf_tgt_br" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.950 Cannot find device "nvmf_tgt_br2" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:20.950 Cannot find device "nvmf_init_br" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:20.950 Cannot find device "nvmf_init_br2" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:20.950 Cannot find device "nvmf_tgt_br" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:20.950 Cannot find device "nvmf_tgt_br2" 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:20.950 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:20.951 Cannot find device "nvmf_br" 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:20.951 Cannot find device "nvmf_init_if" 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:20.951 Cannot find device "nvmf_init_if2" 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.951 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:21.210 13:50:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:21.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:21.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:21.210 00:09:21.210 --- 10.0.0.3 ping statistics --- 00:09:21.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.210 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:21.210 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:21.210 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:21.210 00:09:21.210 --- 10.0.0.4 ping statistics --- 00:09:21.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.210 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:21.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:21.210 00:09:21.210 --- 10.0.0.1 ping statistics --- 00:09:21.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.210 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:21.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:21.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:21.210 00:09:21.210 --- 10.0.0.2 ping statistics --- 00:09:21.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.210 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.210 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=66558 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 66558 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 66558 ']' 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.211 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.469 [2024-12-11 13:50:14.306888] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:09:21.469 [2024-12-11 13:50:14.307165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.469 [2024-12-11 13:50:14.461739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.728 [2024-12-11 13:50:14.520395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.728 [2024-12-11 13:50:14.520457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.728 [2024-12-11 13:50:14.520496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.728 [2024-12-11 13:50:14.520506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.728 [2024-12-11 13:50:14.520516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.728 [2024-12-11 13:50:14.521018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.728 [2024-12-11 13:50:14.581975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 [2024-12-11 13:50:14.704747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.728 [2024-12-11 13:50:14.720885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 malloc0 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:21.728 { 00:09:21.728 "params": { 00:09:21.728 "name": "Nvme$subsystem", 00:09:21.728 "trtype": "$TEST_TRANSPORT", 00:09:21.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.728 "adrfam": "ipv4", 00:09:21.728 "trsvcid": "$NVMF_PORT", 00:09:21.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.728 "hdgst": ${hdgst:-false}, 00:09:21.728 "ddgst": ${ddgst:-false} 00:09:21.728 }, 00:09:21.728 "method": "bdev_nvme_attach_controller" 00:09:21.728 } 00:09:21.728 EOF 00:09:21.728 )") 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:21.728 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
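The verify pass that follows drives host-side I/O with bdevperf instead of the kernel initiator: gen_nvmf_target_json (nvmf/common.sh@560 through @586) expands the heredoc above into a bdev_nvme_attach_controller config (printed just below), which bdevperf reads from a file-descriptor path. A minimal equivalent of that wiring, using the binary path and workload flags from the trace; the use of process substitution is an assumption suggested by the --json /dev/fd/62 argument.

# The JSON bdev config reaches bdevperf on a /dev/fd/NN path (assumed to be
# process substitution); flags match the trace: 10 s verify run, queue depth
# 128, 8 KiB I/Os against the NVMe-oF controller at 10.0.0.3:4420.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192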
00:09:21.987 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:21.987 13:50:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:21.987 "params": { 00:09:21.987 "name": "Nvme1", 00:09:21.987 "trtype": "tcp", 00:09:21.987 "traddr": "10.0.0.3", 00:09:21.987 "adrfam": "ipv4", 00:09:21.987 "trsvcid": "4420", 00:09:21.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.987 "hdgst": false, 00:09:21.987 "ddgst": false 00:09:21.987 }, 00:09:21.987 "method": "bdev_nvme_attach_controller" 00:09:21.987 }' 00:09:21.987 [2024-12-11 13:50:14.822370] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:09:21.987 [2024-12-11 13:50:14.822462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66583 ] 00:09:21.987 [2024-12-11 13:50:14.977796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.244 [2024-12-11 13:50:15.039904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.244 [2024-12-11 13:50:15.109021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.244 Running I/O for 10 seconds... 00:09:24.555 5934.00 IOPS, 46.36 MiB/s [2024-12-11T13:50:18.537Z] 5976.50 IOPS, 46.69 MiB/s [2024-12-11T13:50:19.482Z] 5975.00 IOPS, 46.68 MiB/s [2024-12-11T13:50:20.432Z] 5962.50 IOPS, 46.58 MiB/s [2024-12-11T13:50:21.368Z] 5973.20 IOPS, 46.67 MiB/s [2024-12-11T13:50:22.302Z] 5960.17 IOPS, 46.56 MiB/s [2024-12-11T13:50:23.238Z] 5937.57 IOPS, 46.39 MiB/s [2024-12-11T13:50:24.614Z] 5922.50 IOPS, 46.27 MiB/s [2024-12-11T13:50:25.550Z] 5952.78 IOPS, 46.51 MiB/s [2024-12-11T13:50:25.550Z] 5984.00 IOPS, 46.75 MiB/s 00:09:32.503 Latency(us) 00:09:32.503 [2024-12-11T13:50:25.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:32.503 Verification LBA range: start 0x0 length 0x1000 00:09:32.503 Nvme1n1 : 10.01 5988.94 46.79 0.00 0.00 21304.01 2770.39 38606.66 00:09:32.503 [2024-12-11T13:50:25.550Z] =================================================================================================================== 00:09:32.503 [2024-12-11T13:50:25.550Z] Total : 5988.94 46.79 0.00 0.00 21304.01 2770.39 38606.66 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66706 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:32.503 { 00:09:32.503 "params": { 00:09:32.503 "name": "Nvme$subsystem", 00:09:32.503 "trtype": "$TEST_TRANSPORT", 00:09:32.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.503 "adrfam": "ipv4", 00:09:32.503 "trsvcid": "$NVMF_PORT", 00:09:32.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.503 "hdgst": ${hdgst:-false}, 00:09:32.503 "ddgst": ${ddgst:-false} 00:09:32.503 }, 00:09:32.503 "method": "bdev_nvme_attach_controller" 00:09:32.503 } 00:09:32.503 EOF 00:09:32.503 )") 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:32.503 [2024-12-11 13:50:25.452121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.452163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:32.503 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:32.503 "params": { 00:09:32.503 "name": "Nvme1", 00:09:32.503 "trtype": "tcp", 00:09:32.503 "traddr": "10.0.0.3", 00:09:32.503 "adrfam": "ipv4", 00:09:32.503 "trsvcid": "4420", 00:09:32.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.503 "hdgst": false, 00:09:32.503 "ddgst": false 00:09:32.503 }, 00:09:32.503 "method": "bdev_nvme_attach_controller" 00:09:32.503 }' 00:09:32.503 [2024-12-11 13:50:25.464037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.464127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.472031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.472103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.480028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.480054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.492052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.492080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.493991] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:09:32.503 [2024-12-11 13:50:25.494069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66706 ] 00:09:32.503 [2024-12-11 13:50:25.504034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.504262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.516052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.516243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.528053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.528254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.503 [2024-12-11 13:50:25.540097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.503 [2024-12-11 13:50:25.540289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.552063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.552232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.564087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.564263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.576085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.576258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.588090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.588263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.600108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.600284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.772 [2024-12-11 13:50:25.612097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.772 [2024-12-11 13:50:25.612241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.624097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.624126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.636098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.636128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.637053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.773 [2024-12-11 13:50:25.648111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.648142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.660127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.660158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.672125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.672154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.684123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.684157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.696127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.696161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.698917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.773 [2024-12-11 13:50:25.708112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.708140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.720144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.720176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.732133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.732165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.744139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.744175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.756146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.756182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.764434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.773 [2024-12-11 13:50:25.768137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.768315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.780168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.780212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.792154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.792194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.773 [2024-12-11 13:50:25.804160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.773 [2024-12-11 13:50:25.804189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.816132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:33.046 [2024-12-11 13:50:25.816164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.828160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.828199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.840167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.840201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.852184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.852220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.864204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.864250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.876210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.876244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.888216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 Running I/O for 5 seconds... 00:09:33.046 [2024-12-11 13:50:25.888382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.906542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.906747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.921835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.922029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.931340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.931559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.947215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.947407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.964679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.964929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.979428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.979652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:25.995266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:25.995477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:26.012959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:26.013106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:33.046 [2024-12-11 13:50:26.028941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:26.029115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:26.045990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:26.046140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:26.062396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:26.062579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.046 [2024-12-11 13:50:26.079949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.046 [2024-12-11 13:50:26.080198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.094697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.094922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.110187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.110400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.119588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.119808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.134767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.134955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.150444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.150630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.159903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.160080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.175016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.175253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.190139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.190176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.206928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.206963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.221805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.221839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.237825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 
[2024-12-11 13:50:26.237859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.305 [2024-12-11 13:50:26.254345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.305 [2024-12-11 13:50:26.254379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.306 [2024-12-11 13:50:26.270800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.306 [2024-12-11 13:50:26.270836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.306 [2024-12-11 13:50:26.288307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.306 [2024-12-11 13:50:26.288342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.306 [2024-12-11 13:50:26.304061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.306 [2024-12-11 13:50:26.304112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.306 [2024-12-11 13:50:26.321652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.306 [2024-12-11 13:50:26.321688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.306 [2024-12-11 13:50:26.337586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.306 [2024-12-11 13:50:26.337632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.564 [2024-12-11 13:50:26.354670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.564 [2024-12-11 13:50:26.354927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.564 [2024-12-11 13:50:26.369895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.564 [2024-12-11 13:50:26.369929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.386561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.386597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.403002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.403039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.419469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.419506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.435946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.435988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.452595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.452632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.469359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.469576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.485893] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.485933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.502939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.503153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.519733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.519778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.537306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.537341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.553540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.553582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.570383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.570657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.586965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.587002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.565 [2024-12-11 13:50:26.604163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.565 [2024-12-11 13:50:26.604197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.620604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.620639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.639583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.639620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.654379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.654414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.663687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.663766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.679049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.679084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.695762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.695797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.712284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.712320] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.728647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.728683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.745269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.745471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.761551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.761599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.778678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.778760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.794303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.794337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.809805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.809838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.827676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.827907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.842842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.842878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.852809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.852848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.823 [2024-12-11 13:50:26.867943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.823 [2024-12-11 13:50:26.867985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:26.885038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.885078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 11991.00 IOPS, 93.68 MiB/s [2024-12-11T13:50:27.129Z] [2024-12-11 13:50:26.901026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.901074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:26.918424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.918626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:26.935441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.935487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 
13:50:26.952748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.952802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:26.968663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.968996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:26.985883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:26.985918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.002672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.002754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.019541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.019576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.035308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.035342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.053673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.053736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.068525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.068756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.086325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.086360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.101669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.101753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.082 [2024-12-11 13:50:27.112540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.082 [2024-12-11 13:50:27.112756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.129111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.129161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.145942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.146006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.163229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.163279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.177125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.177162] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.193383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.193418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.208826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.208860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.220164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.220199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.236054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.236088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.252322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.252356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.269904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.269945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.284902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.284936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.301043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.301078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.316521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.316556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.334986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.335160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.349972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.350124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.366409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.366443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.340 [2024-12-11 13:50:27.383174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.340 [2024-12-11 13:50:27.383224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.400971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.401152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.415846] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.415882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.425578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.425617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.442027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.442063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.452040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.452091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.466737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.466804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.476336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.476373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.491690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.491916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.507557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.507750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.524706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.524785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.541054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.541105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.560004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.560039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.574594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.574789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.584557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.584592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.599950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.599984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.618017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.618173] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.599 [2024-12-11 13:50:27.632188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.599 [2024-12-11 13:50:27.632225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.647986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.648038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.665318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.665649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.679441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.679497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.695350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.695385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.713443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.713477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.727813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.727847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.743540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.743746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.760742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.760776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.775802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.775835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.785663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.857 [2024-12-11 13:50:27.785724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.857 [2024-12-11 13:50:27.800386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.800453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.809443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.809478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.824934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.824969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.841249] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.841286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.857006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.857042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.876209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 [2024-12-11 13:50:27.876247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.858 [2024-12-11 13:50:27.891596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.858 12157.00 IOPS, 94.98 MiB/s [2024-12-11T13:50:27.905Z] [2024-12-11 13:50:27.891812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.908368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.908422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.925764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.925813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.941353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.941389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.960324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.960359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.974893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.975077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.985134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.985172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:27.999541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:27.999594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.009481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.009529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.025738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.025814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.042571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.042797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.058914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:35.116 [2024-12-11 13:50:28.058949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.076248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.076283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.092447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.092482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.110204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.110410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.125540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.125738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.134796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.134829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.116 [2024-12-11 13:50:28.149758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.116 [2024-12-11 13:50:28.149979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.165619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.165846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.181233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.181627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.198648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.198729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.215780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.215991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.231931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.231966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.248843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.248876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.266730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.266798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.281002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.281037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.297278] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.297313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.313036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.313086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.330639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.330673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.346943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.347171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.363387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.363422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.380848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.380882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.395644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.395678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.375 [2024-12-11 13:50:28.411785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.375 [2024-12-11 13:50:28.411835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.428182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.428234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.443632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.443880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.453653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.453688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.469087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.469137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.485830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.485866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.504156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.504190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.518726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.518809] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.533693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.533776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.544605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.544639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.560592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.560625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.578029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.578069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.593556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.593590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.602769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.602805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.618977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.619242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.629313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.629355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.644103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.644142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.653450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.653486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.633 [2024-12-11 13:50:28.669174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.633 [2024-12-11 13:50:28.669211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.685145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.685180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.702591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.702780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.719693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.719771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.735765] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.735801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.751751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.751786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.763389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.763423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.779900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.779935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.795390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.795424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.813802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.813862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.828198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.828232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.844665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.844728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.860587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.860624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.876967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.877007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.893041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.893079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 12260.00 IOPS, 95.78 MiB/s [2024-12-11T13:50:28.939Z] [2024-12-11 13:50:28.912542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.912586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.892 [2024-12-11 13:50:28.927648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.892 [2024-12-11 13:50:28.927689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:28.943940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:28.943974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:28.961217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:36.151 [2024-12-11 13:50:28.961256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:28.975155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:28.975193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:28.991516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:28.991553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.007430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.007463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.026720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.027076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.041104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.041165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.057139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.057178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.073862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.073897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.090608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.090645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.107611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.107648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.124400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.124436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.141021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.141343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.157588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.157642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.173952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.173988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.151 [2024-12-11 13:50:29.189691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.151 [2024-12-11 13:50:29.189775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.205125] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.205174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.223380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.223556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.238691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.238891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.254832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.255003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.272397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.272559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.287701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.287927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.303656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.303853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.321602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.321787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.337147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.337315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.353017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.353206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.370544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.370763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.410 [2024-12-11 13:50:29.385301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.410 [2024-12-11 13:50:29.385474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.411 [2024-12-11 13:50:29.401889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.411 [2024-12-11 13:50:29.402083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.411 [2024-12-11 13:50:29.417023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.411 [2024-12-11 13:50:29.417184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.411 [2024-12-11 13:50:29.426504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.411 [2024-12-11 13:50:29.426679] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.411 [2024-12-11 13:50:29.443107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.411 [2024-12-11 13:50:29.443267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.462236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.462416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.477903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.478084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.495581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.495780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.510005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.510179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.527219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.527255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.542394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.542428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.558488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.558527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.575978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.576013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.591720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.591780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.609304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.609522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.623585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.623621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.639688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.639769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.656435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.656612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.673721] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.673778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.690270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.690306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.670 [2024-12-11 13:50:29.707055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.670 [2024-12-11 13:50:29.707090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.723091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.723127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.740420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.740463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.756763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.756802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.773599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.773637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.790293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.790492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.806476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.806511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.825512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.825728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.839684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.839765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.854796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.854830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.871433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.871467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.887599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.887633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 12273.50 IOPS, 95.89 MiB/s [2024-12-11T13:50:29.976Z] [2024-12-11 13:50:29.904545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
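The long run of paired subsystem.c:2130 / nvmf_rpc.c:1520 errors above and below is the zcopy test repeatedly asking the target to attach a namespace under an NSID the subsystem already owns while I/O is in flight, so every attempt is rejected as expected. A minimal reproduction sketch, assuming the stock scripts/rpc.py wrapper and the cnode1 subsystem and malloc0 bdev used elsewhere in this run (not the exact call the test script issues), would be:

  # NSID 1 is already attached to the subsystem, so this add is expected to fail
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # target log: "Requested NSID 1 already in use" / "Unable to add namespace"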
00:09:36.929 [2024-12-11 13:50:29.904582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.922484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.922520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.937608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.937820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.947174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.947212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 [2024-12-11 13:50:29.962864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-11 13:50:29.962898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:29.978609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:29.978645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:29.987909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:29.987943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.003834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.003873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.014158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.014201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.028592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.028630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.045325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.045523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.061933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.061971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.079819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.079857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.095113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.095315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.112214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.112267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.129087] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.129134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.144412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.144449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.159857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.159893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.169256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.169307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.184909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.184955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.201721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.201781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.188 [2024-12-11 13:50:30.218882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.188 [2024-12-11 13:50:30.218922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.446 [2024-12-11 13:50:30.234746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.446 [2024-12-11 13:50:30.234791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.446 [2024-12-11 13:50:30.253680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.446 [2024-12-11 13:50:30.253938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.267556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.267591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.284321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.284356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.299662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.299728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.309108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.309320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.323386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.323424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.339530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.339565] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.356174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.356370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.372507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.372555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.390158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.390212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.406426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.406621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.423036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.423070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.440488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.440524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.456043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.456095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.473340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.473543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.447 [2024-12-11 13:50:30.489347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.447 [2024-12-11 13:50:30.489384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.499236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.499271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.514526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.514563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.530439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.530476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.547599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.547648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.564456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.564797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.579861] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.579899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.589545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.589580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.604060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.604097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.619541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.619762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.636001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.636038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.653238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.653274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.669685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.669787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.687905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.687939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.701977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.702152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.717613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.717813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.726931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.726968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.705 [2024-12-11 13:50:30.743723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.705 [2024-12-11 13:50:30.743770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.760125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.760174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.777594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.777809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.793133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.793169] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.802220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.802271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.818164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.818247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.834654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.834689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.850523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.850557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.868064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.868130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.883523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.883743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 12249.40 IOPS, 95.70 MiB/s [2024-12-11T13:50:31.010Z] [2024-12-11 13:50:30.893169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.893205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 00:09:37.963 Latency(us) 00:09:37.963 [2024-12-11T13:50:31.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.963 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:37.963 Nvme1n1 : 5.01 12249.61 95.70 0.00 0.00 10436.18 4289.63 21328.99 00:09:37.963 [2024-12-11T13:50:31.010Z] =================================================================================================================== 00:09:37.963 [2024-12-11T13:50:31.010Z] Total : 12249.61 95.70 0.00 0.00 10436.18 4289.63 21328.99 00:09:37.963 [2024-12-11 13:50:30.904790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.904820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.916754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.916788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.928781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.928853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.940778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.940818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.952798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 
13:50:30.952841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.964799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.964842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.976798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.976839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:30.988806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:30.988845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.963 [2024-12-11 13:50:31.000800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.963 [2024-12-11 13:50:31.000841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.012811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.012856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.024806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.024865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.036808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.036865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.048809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.049066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.060808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.060842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.072826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.072897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.084825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.084877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.096804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.096830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 [2024-12-11 13:50:31.108804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.222 [2024-12-11 13:50:31.108832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.222 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66706) - No such process 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66706 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:38.222 delay0 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.222 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:38.480 [2024-12-11 13:50:31.299619] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:45.062 Initializing NVMe Controllers 00:09:45.062 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:45.062 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:45.062 Initialization complete. Launching workers. 
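Before launching the abort example, the RPCs in this trace detach NSID 1, wrap the malloc0 bdev in a delay bdev (delay0) so submitted I/O lingers in the target long enough to be cancelled, and re-attach delay0 as NSID 1. A condensed sketch of the same sequence, assuming the stock scripts/rpc.py wrapper (the -r/-t/-w/-n arguments are read/write average and tail latencies in microseconds, and -q 64 / -w randrw / -M 50 / -t 5 mean queue depth 64, 50/50 random read/write, for 5 seconds), would be:

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The per-namespace and per-controller counters that follow show how many of the queued commands completed normally versus how many aborts were submitted and succeeded.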
00:09:45.062 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:09:45.062 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:09:45.062 success 246, unsuccessful 115, failed 0 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.062 rmmod nvme_tcp 00:09:45.062 rmmod nvme_fabrics 00:09:45.062 rmmod nvme_keyring 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 66558 ']' 00:09:45.062 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 66558 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 66558 ']' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 66558 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66558 00:09:45.063 killing process with pid 66558 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66558' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 66558 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 66558 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.063 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:45.063 00:09:45.063 real 0m24.361s 00:09:45.063 user 0m39.992s 00:09:45.063 sys 0m6.684s 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.063 ************************************ 00:09:45.063 END TEST nvmf_zcopy 00:09:45.063 ************************************ 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.063 13:50:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.063 ************************************ 00:09:45.063 START TEST nvmf_nmic 00:09:45.063 ************************************ 00:09:45.063 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:45.063 * Looking for test storage... 00:09:45.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.063 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.063 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.063 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.322 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.323 13:50:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.323 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:45.323 13:50:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.323 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.324 Cannot 
find device "nvmf_init_br" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.324 Cannot find device "nvmf_init_br2" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.324 Cannot find device "nvmf_tgt_br" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.324 Cannot find device "nvmf_tgt_br2" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.324 Cannot find device "nvmf_init_br" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.324 Cannot find device "nvmf_init_br2" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.324 Cannot find device "nvmf_tgt_br" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.324 Cannot find device "nvmf_tgt_br2" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.324 Cannot find device "nvmf_br" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.324 Cannot find device "nvmf_init_if" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.324 Cannot find device "nvmf_init_if2" 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.324 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:45.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:45.583 00:09:45.583 --- 10.0.0.3 ping statistics --- 00:09:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.583 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:45.583 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:45.583 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:09:45.583 00:09:45.583 --- 10.0.0.4 ping statistics --- 00:09:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.583 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:45.583 00:09:45.583 --- 10.0.0.1 ping statistics --- 00:09:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.583 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:45.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:45.583 00:09:45.583 --- 10.0.0.2 ping statistics --- 00:09:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.583 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=67075 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 67075 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 67075 ']' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.583 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.842 [2024-12-11 13:50:38.652840] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
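With the data path verified by the pings, nvmfappstart launches the target inside the namespace ("ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF", pid 67075) and waitforlisten blocks until it answers on /var/tmp/spdk.sock. The rpc_cmd provisioning steps that follow in the trace correspond roughly to these rpc.py calls (a sketch only; the rpc.py path and the default RPC socket are assumed, since the test helpers hide the exact invocation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# transport options exactly as the test passes them (-t tcp -o -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
$rpc bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1: any host allowed (-a), serial SPDKISFASTANDAWESOME
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the namespaced target address
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420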
00:09:45.842 [2024-12-11 13:50:38.652938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.842 [2024-12-11 13:50:38.809257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.842 [2024-12-11 13:50:38.876982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.842 [2024-12-11 13:50:38.877064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.842 [2024-12-11 13:50:38.877078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.842 [2024-12-11 13:50:38.877089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.842 [2024-12-11 13:50:38.877098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.842 [2024-12-11 13:50:38.878458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.842 [2024-12-11 13:50:38.878662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.842 [2024-12-11 13:50:38.878669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.842 [2024-12-11 13:50:38.878518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.100 [2024-12-11 13:50:38.938640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.100 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 [2024-12-11 13:50:39.061499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 Malloc0 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.101 13:50:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 [2024-12-11 13:50:39.126414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 test case1: single bdev can't be used in multiple subsystems 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.360 [2024-12-11 13:50:39.154192] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:46.360 [2024-12-11 13:50:39.154237] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:46.360 [2024-12-11 13:50:39.154250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.360 request: 00:09:46.360 { 00:09:46.360 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:46.360 "namespace": { 00:09:46.360 "bdev_name": "Malloc0", 00:09:46.360 "no_auto_visible": false, 00:09:46.360 "hide_metadata": false 00:09:46.360 }, 00:09:46.360 "method": "nvmf_subsystem_add_ns", 00:09:46.360 "req_id": 1 00:09:46.360 } 00:09:46.360 Got JSON-RPC error response 00:09:46.360 response: 00:09:46.360 { 00:09:46.360 "code": -32602, 00:09:46.360 "message": "Invalid parameters" 00:09:46.360 } 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:46.360 Adding namespace failed - expected result. 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:46.360 test case2: host connect to nvmf target in multiple paths 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.360 [2024-12-11 13:50:39.166351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:46.360 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:46.619 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.619 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:46.619 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.619 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:46.619 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:48.521 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.521 [global] 00:09:48.521 thread=1 00:09:48.521 invalidate=1 00:09:48.521 rw=write 00:09:48.521 time_based=1 00:09:48.521 runtime=1 00:09:48.521 ioengine=libaio 00:09:48.521 direct=1 00:09:48.521 bs=4096 00:09:48.521 iodepth=1 00:09:48.521 norandommap=0 00:09:48.521 numjobs=1 00:09:48.521 00:09:48.521 verify_dump=1 00:09:48.521 verify_backlog=512 00:09:48.521 verify_state_save=0 00:09:48.521 do_verify=1 00:09:48.521 verify=crc32c-intel 00:09:48.521 [job0] 00:09:48.521 filename=/dev/nvme0n1 00:09:48.521 Could not set queue depth (nvme0n1) 00:09:48.779 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.780 fio-3.35 00:09:48.780 Starting 1 thread 00:09:49.714 00:09:49.714 job0: (groupid=0, jobs=1): err= 0: pid=67158: Wed Dec 11 13:50:42 2024 00:09:49.714 read: IOPS=3058, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1001msec) 00:09:49.714 slat (nsec): min=11192, max=45693, avg=13261.85, stdev=3424.01 00:09:49.714 clat (usec): min=137, max=1466, avg=180.97, stdev=32.21 00:09:49.714 lat (usec): min=149, max=1478, avg=194.23, stdev=32.45 00:09:49.714 clat percentiles (usec): 00:09:49.714 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:09:49.714 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:09:49.714 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 208], 00:09:49.714 | 99.00th=[ 231], 99.50th=[ 243], 99.90th=[ 400], 99.95th=[ 832], 00:09:49.714 | 99.99th=[ 1467] 00:09:49.714 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:49.714 slat (usec): min=16, max=125, avg=19.86, stdev= 5.37 00:09:49.714 clat (usec): min=84, max=294, avg=108.77, stdev=13.14 00:09:49.714 lat (usec): min=101, max=419, avg=128.64, stdev=15.39 00:09:49.714 clat percentiles (usec): 00:09:49.715 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:09:49.715 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:09:49.715 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 125], 95.00th=[ 133], 00:09:49.715 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 217], 99.95th=[ 235], 00:09:49.715 | 99.99th=[ 293] 00:09:49.715 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:49.715 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:49.715 lat (usec) : 100=11.56%, 250=88.21%, 500=0.20%, 1000=0.02% 00:09:49.715 lat (msec) : 2=0.02% 00:09:49.715 cpu : usr=2.00%, sys=8.20%, ctx=6134, majf=0, minf=5 00:09:49.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.715 issued rwts: total=3062,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.715 00:09:49.715 Run status group 0 (all jobs): 00:09:49.715 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=12.0MiB (12.5MB), run=1001-1001msec 00:09:49.715 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:49.715 00:09:49.715 Disk stats (read/write): 00:09:49.715 
nvme0n1: ios=2609/3053, merge=0/0, ticks=493/364, in_queue=857, util=91.37% 00:09:49.715 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:49.973 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.973 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:49.973 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.973 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.973 rmmod nvme_tcp 00:09:49.973 rmmod nvme_fabrics 00:09:50.231 rmmod nvme_keyring 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 67075 ']' 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 67075 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 67075 ']' 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 67075 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67075 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.231 killing process with pid 67075 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67075' 00:09:50.231 13:50:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 67075 00:09:50.231 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 67075 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.490 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:50.748 00:09:50.748 real 0m5.599s 00:09:50.748 user 0m16.471s 00:09:50.748 sys 0m2.317s 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.748 ************************************ 00:09:50.748 END TEST nvmf_nmic 00:09:50.748 ************************************ 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.748 ************************************ 00:09:50.748 START TEST nvmf_fio_target 00:09:50.748 ************************************ 00:09:50.748 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.748 * Looking for test storage... 00:09:50.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.749 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.749 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.749 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:51.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.008 --rc genhtml_branch_coverage=1 00:09:51.008 --rc genhtml_function_coverage=1 00:09:51.008 --rc genhtml_legend=1 00:09:51.008 --rc geninfo_all_blocks=1 00:09:51.008 --rc geninfo_unexecuted_blocks=1 00:09:51.008 00:09:51.008 ' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:51.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.008 --rc genhtml_branch_coverage=1 00:09:51.008 --rc genhtml_function_coverage=1 00:09:51.008 --rc genhtml_legend=1 00:09:51.008 --rc geninfo_all_blocks=1 00:09:51.008 --rc geninfo_unexecuted_blocks=1 00:09:51.008 00:09:51.008 ' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:51.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.008 --rc genhtml_branch_coverage=1 00:09:51.008 --rc genhtml_function_coverage=1 00:09:51.008 --rc genhtml_legend=1 00:09:51.008 --rc geninfo_all_blocks=1 00:09:51.008 --rc geninfo_unexecuted_blocks=1 00:09:51.008 00:09:51.008 ' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:51.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.008 --rc genhtml_branch_coverage=1 00:09:51.008 --rc genhtml_function_coverage=1 00:09:51.008 --rc genhtml_legend=1 00:09:51.008 --rc geninfo_all_blocks=1 00:09:51.008 --rc geninfo_unexecuted_blocks=1 00:09:51.008 00:09:51.008 ' 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:51.008 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:51.008 
13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.009 13:50:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:51.009 Cannot find device "nvmf_init_br" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:51.009 Cannot find device "nvmf_init_br2" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:51.009 Cannot find device "nvmf_tgt_br" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:51.009 Cannot find device "nvmf_tgt_br2" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:51.009 Cannot find device "nvmf_init_br" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:51.009 Cannot find device "nvmf_init_br2" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:51.009 Cannot find device "nvmf_tgt_br" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:51.009 Cannot find device "nvmf_tgt_br2" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:51.009 Cannot find device "nvmf_br" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:51.009 Cannot find device "nvmf_init_if" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:51.009 Cannot find device "nvmf_init_if2" 00:09:51.009 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:51.010 
13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:51.010 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.010 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.010 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.010 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.010 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.010 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:51.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:51.269 00:09:51.269 --- 10.0.0.3 ping statistics --- 00:09:51.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.269 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:51.269 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:51.269 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:51.269 00:09:51.269 --- 10.0.0.4 ping statistics --- 00:09:51.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.269 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:51.269 00:09:51.269 --- 10.0.0.1 ping statistics --- 00:09:51.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.269 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:51.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:51.269 00:09:51.269 --- 10.0.0.2 ping statistics --- 00:09:51.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.269 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=67393 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 67393 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 67393 ']' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.269 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.270 [2024-12-11 13:50:44.292161] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
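The network plumbing logged above is easier to see in isolation. Below is a condensed sketch of what nvmf/common.sh sets up here, reduced to one initiator-side and one target-side veth pair (the real run also creates the *_if2/*_br2 twins), using the interface, namespace and address names from the log:

  ip netns add nvmf_tgt_ns_spdk                                    # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end + its bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end + its bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                  # bridge joining the two sides
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.3                                               # reachability check, as in the log

The earlier "Cannot find device" / "Cannot open network namespace" messages come from the teardown pass that runs before this setup: on a fresh run there is nothing to delete, each failing command is followed by 'true', and the script simply moves on.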
00:09:51.270 [2024-12-11 13:50:44.292267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.528 [2024-12-11 13:50:44.446812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.528 [2024-12-11 13:50:44.508324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.528 [2024-12-11 13:50:44.508419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.528 [2024-12-11 13:50:44.508434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.528 [2024-12-11 13:50:44.508444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.528 [2024-12-11 13:50:44.508453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.528 [2024-12-11 13:50:44.509802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.528 [2024-12-11 13:50:44.509884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.528 [2024-12-11 13:50:44.510015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.528 [2024-12-11 13:50:44.510021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.528 [2024-12-11 13:50:44.569632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.786 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:52.044 [2024-12-11 13:50:44.911757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.044 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.302 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:52.302 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.560 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:52.560 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.818 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:52.818 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.076 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:53.076 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:53.668 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.926 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:53.926 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.184 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:54.184 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.442 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:54.442 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:54.700 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.959 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:54.959 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.217 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.217 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:55.475 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:55.733 [2024-12-11 13:50:48.523682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:55.733 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:55.991 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:56.249 13:50:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:56.249 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:58.778 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:58.778 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:58.779 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.779 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:58.779 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.779 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:58.779 13:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:58.779 [global] 00:09:58.779 thread=1 00:09:58.779 invalidate=1 00:09:58.779 rw=write 00:09:58.779 time_based=1 00:09:58.779 runtime=1 00:09:58.779 ioengine=libaio 00:09:58.779 direct=1 00:09:58.779 bs=4096 00:09:58.779 iodepth=1 00:09:58.779 norandommap=0 00:09:58.779 numjobs=1 00:09:58.779 00:09:58.779 verify_dump=1 00:09:58.779 verify_backlog=512 00:09:58.779 verify_state_save=0 00:09:58.779 do_verify=1 00:09:58.779 verify=crc32c-intel 00:09:58.779 [job0] 00:09:58.779 filename=/dev/nvme0n1 00:09:58.779 [job1] 00:09:58.779 filename=/dev/nvme0n2 00:09:58.779 [job2] 00:09:58.779 filename=/dev/nvme0n3 00:09:58.779 [job3] 00:09:58.779 filename=/dev/nvme0n4 00:09:58.779 Could not set queue depth (nvme0n1) 00:09:58.779 Could not set queue depth (nvme0n2) 00:09:58.779 Could not set queue depth (nvme0n3) 00:09:58.779 Could not set queue depth (nvme0n4) 00:09:58.779 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.779 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.779 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.779 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.779 fio-3.35 00:09:58.779 Starting 4 threads 00:09:59.714 00:09:59.714 job0: (groupid=0, jobs=1): err= 0: pid=67574: Wed Dec 11 13:50:52 2024 00:09:59.714 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:09:59.714 slat (nsec): min=11306, max=55970, avg=14432.16, stdev=3597.43 00:09:59.714 clat (usec): min=144, max=620, avg=176.71, stdev=15.53 00:09:59.714 lat (usec): min=157, max=647, avg=191.14, stdev=16.16 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:09:59.714 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:09:59.714 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:09:59.714 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 237], 99.95th=[ 241], 00:09:59.714 | 99.99th=[ 619] 
00:09:59.714 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:59.714 slat (usec): min=13, max=121, avg=19.98, stdev= 4.73 00:09:59.714 clat (usec): min=95, max=1760, avg=132.38, stdev=34.82 00:09:59.714 lat (usec): min=112, max=1779, avg=152.35, stdev=35.30 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 122], 00:09:59.714 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:09:59.714 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 155], 00:09:59.714 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 453], 99.95th=[ 652], 00:09:59.714 | 99.99th=[ 1762] 00:09:59.714 bw ( KiB/s): min=12288, max=12288, per=25.46%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.714 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.714 lat (usec) : 100=0.03%, 250=99.83%, 500=0.09%, 750=0.03% 00:09:59.714 lat (msec) : 2=0.02% 00:09:59.714 cpu : usr=3.50%, sys=6.60%, ctx=5805, majf=0, minf=7 00:09:59.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 issued rwts: total=2733,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.714 job1: (groupid=0, jobs=1): err= 0: pid=67575: Wed Dec 11 13:50:52 2024 00:09:59.714 read: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:09:59.714 slat (nsec): min=11118, max=40490, avg=13623.81, stdev=2466.75 00:09:59.714 clat (usec): min=139, max=261, avg=174.49, stdev=19.79 00:09:59.714 lat (usec): min=151, max=283, avg=188.11, stdev=20.05 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:09:59.714 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:09:59.714 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 219], 00:09:59.714 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 262], 00:09:59.714 | 99.99th=[ 262] 00:09:59.714 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:59.714 slat (usec): min=13, max=109, avg=19.96, stdev= 4.72 00:09:59.714 clat (usec): min=95, max=259, avg=126.74, stdev=12.70 00:09:59.714 lat (usec): min=112, max=368, avg=146.70, stdev=14.47 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:09:59.714 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 129], 00:09:59.714 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:09:59.714 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 202], 99.95th=[ 215], 00:09:59.714 | 99.99th=[ 260] 00:09:59.714 bw ( KiB/s): min=12288, max=12288, per=25.46%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.714 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.714 lat (usec) : 100=0.19%, 250=99.56%, 500=0.25% 00:09:59.714 cpu : usr=2.00%, sys=8.20%, ctx=5941, majf=0, minf=7 00:09:59.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 issued rwts: total=2869,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.714 latency : target=0, window=0, percentile=100.00%, depth=1 
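As a quick sanity check on these fio reports: with the 4 KiB block size set in the job file above, bandwidth is simply IOPS times block size. Taking the job1 write figures as an example:

  # bandwidth = IOPS * block size (values from the job1 write line above)
  echo "$(( 3072 * 4096 / 1024 )) KiB/s"    # 12288 KiB/s, i.e. the 12.0 MiB/s shown in the report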
00:09:59.714 job2: (groupid=0, jobs=1): err= 0: pid=67576: Wed Dec 11 13:50:52 2024 00:09:59.714 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:59.714 slat (nsec): min=11094, max=60656, avg=16020.05, stdev=5823.80 00:09:59.714 clat (usec): min=153, max=2535, avg=187.27, stdev=51.40 00:09:59.714 lat (usec): min=167, max=2552, avg=203.29, stdev=52.55 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:09:59.714 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:59.714 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 215], 00:09:59.714 | 99.00th=[ 231], 99.50th=[ 241], 99.90th=[ 553], 99.95th=[ 840], 00:09:59.714 | 99.99th=[ 2540] 00:09:59.714 write: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:09:59.714 slat (usec): min=13, max=164, avg=23.00, stdev= 7.33 00:09:59.714 clat (usec): min=108, max=1983, avg=141.12, stdev=40.33 00:09:59.714 lat (usec): min=125, max=2004, avg=164.12, stdev=41.54 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:09:59.714 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:09:59.714 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:09:59.714 | 99.00th=[ 188], 99.50th=[ 235], 99.90th=[ 478], 99.95th=[ 766], 00:09:59.714 | 99.99th=[ 1991] 00:09:59.714 bw ( KiB/s): min=12288, max=12288, per=25.46%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.714 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.714 lat (usec) : 250=99.59%, 500=0.31%, 750=0.02%, 1000=0.04% 00:09:59.714 lat (msec) : 2=0.02%, 4=0.02% 00:09:59.714 cpu : usr=1.90%, sys=8.80%, ctx=5424, majf=0, minf=9 00:09:59.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 issued rwts: total=2560,2861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.714 job3: (groupid=0, jobs=1): err= 0: pid=67577: Wed Dec 11 13:50:52 2024 00:09:59.714 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:59.714 slat (nsec): min=10946, max=38990, avg=13521.36, stdev=2746.73 00:09:59.714 clat (usec): min=151, max=651, avg=182.95, stdev=16.74 00:09:59.714 lat (usec): min=163, max=663, avg=196.47, stdev=17.12 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:09:59.714 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:59.714 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:09:59.714 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 251], 99.95th=[ 265], 00:09:59.714 | 99.99th=[ 652] 00:09:59.714 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:59.714 slat (usec): min=13, max=101, avg=19.69, stdev= 4.38 00:09:59.714 clat (usec): min=104, max=1584, avg=139.06, stdev=29.87 00:09:59.714 lat (usec): min=122, max=1601, avg=158.75, stdev=30.45 00:09:59.714 clat percentiles (usec): 00:09:59.714 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:09:59.714 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:09:59.714 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:09:59.714 | 99.00th=[ 178], 99.50th=[ 182], 
99.90th=[ 265], 99.95th=[ 424], 00:09:59.714 | 99.99th=[ 1582] 00:09:59.714 bw ( KiB/s): min=12288, max=12288, per=25.46%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.714 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.714 lat (usec) : 250=99.88%, 500=0.09%, 750=0.02% 00:09:59.714 lat (msec) : 2=0.02% 00:09:59.714 cpu : usr=1.70%, sys=7.90%, ctx=5635, majf=0, minf=16 00:09:59.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.714 issued rwts: total=2563,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.714 00:09:59.714 Run status group 0 (all jobs): 00:09:59.714 READ: bw=41.9MiB/s (43.9MB/s), 9.99MiB/s-11.2MiB/s (10.5MB/s-11.7MB/s), io=41.9MiB (43.9MB), run=1001-1001msec 00:09:59.714 WRITE: bw=47.1MiB/s (49.4MB/s), 11.2MiB/s-12.0MiB/s (11.7MB/s-12.6MB/s), io=47.2MiB (49.5MB), run=1001-1001msec 00:09:59.714 00:09:59.714 Disk stats (read/write): 00:09:59.714 nvme0n1: ios=2429/2560, merge=0/0, ticks=467/355, in_queue=822, util=87.68% 00:09:59.714 nvme0n2: ios=2593/2560, merge=0/0, ticks=473/353, in_queue=826, util=88.53% 00:09:59.714 nvme0n3: ios=2163/2560, merge=0/0, ticks=407/380, in_queue=787, util=89.17% 00:09:59.714 nvme0n4: ios=2232/2560, merge=0/0, ticks=413/380, in_queue=793, util=89.63% 00:09:59.714 13:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:59.714 [global] 00:09:59.714 thread=1 00:09:59.715 invalidate=1 00:09:59.715 rw=randwrite 00:09:59.715 time_based=1 00:09:59.715 runtime=1 00:09:59.715 ioengine=libaio 00:09:59.715 direct=1 00:09:59.715 bs=4096 00:09:59.715 iodepth=1 00:09:59.715 norandommap=0 00:09:59.715 numjobs=1 00:09:59.715 00:09:59.715 verify_dump=1 00:09:59.715 verify_backlog=512 00:09:59.715 verify_state_save=0 00:09:59.715 do_verify=1 00:09:59.715 verify=crc32c-intel 00:09:59.715 [job0] 00:09:59.715 filename=/dev/nvme0n1 00:09:59.715 [job1] 00:09:59.715 filename=/dev/nvme0n2 00:09:59.715 [job2] 00:09:59.715 filename=/dev/nvme0n3 00:09:59.715 [job3] 00:09:59.715 filename=/dev/nvme0n4 00:09:59.715 Could not set queue depth (nvme0n1) 00:09:59.715 Could not set queue depth (nvme0n2) 00:09:59.715 Could not set queue depth (nvme0n3) 00:09:59.715 Could not set queue depth (nvme0n4) 00:09:59.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.973 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.973 fio-3.35 00:09:59.973 Starting 4 threads 00:10:01.349 00:10:01.349 job0: (groupid=0, jobs=1): err= 0: pid=67631: Wed Dec 11 13:50:53 2024 00:10:01.349 read: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec) 00:10:01.349 slat (nsec): min=11345, max=47780, avg=14363.21, stdev=3897.66 00:10:01.349 clat (usec): min=142, max=676, avg=280.99, stdev=62.32 00:10:01.349 lat (usec): min=154, max=704, avg=295.35, stdev=64.01 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 190], 
5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:10:01.349 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:10:01.349 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 351], 95.00th=[ 469], 00:10:01.349 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 570], 99.95th=[ 676], 00:10:01.349 | 99.99th=[ 676] 00:10:01.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.349 slat (nsec): min=16175, max=85848, avg=20568.96, stdev=4738.43 00:10:01.349 clat (usec): min=93, max=1845, avg=181.70, stdev=51.26 00:10:01.349 lat (usec): min=115, max=1866, avg=202.27, stdev=52.02 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 101], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 147], 00:10:01.349 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:10:01.349 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:10:01.349 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 322], 99.95th=[ 429], 00:10:01.349 | 99.99th=[ 1844] 00:10:01.349 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.349 lat (usec) : 100=0.32%, 250=62.91%, 500=36.02%, 750=0.72% 00:10:01.349 lat (msec) : 2=0.02% 00:10:01.349 cpu : usr=2.00%, sys=5.40%, ctx=4007, majf=0, minf=7 00:10:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 issued rwts: total=1958,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.349 job1: (groupid=0, jobs=1): err= 0: pid=67632: Wed Dec 11 13:50:53 2024 00:10:01.349 read: IOPS=2005, BW=8024KiB/s (8217kB/s)(8032KiB/1001msec) 00:10:01.349 slat (nsec): min=11712, max=39511, avg=13036.52, stdev=2136.62 00:10:01.349 clat (usec): min=138, max=2901, avg=265.19, stdev=82.35 00:10:01.349 lat (usec): min=150, max=2929, avg=278.22, stdev=83.02 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 149], 5.00th=[ 180], 10.00th=[ 239], 20.00th=[ 245], 00:10:01.349 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 265], 00:10:01.349 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 343], 00:10:01.349 | 99.00th=[ 400], 99.50th=[ 486], 99.90th=[ 1237], 99.95th=[ 1631], 00:10:01.349 | 99.99th=[ 2900] 00:10:01.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.349 slat (nsec): min=16542, max=59786, avg=19065.05, stdev=4187.80 00:10:01.349 clat (usec): min=102, max=852, avg=193.30, stdev=32.45 00:10:01.349 lat (usec): min=124, max=870, avg=212.37, stdev=34.37 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 117], 5.00th=[ 133], 10.00th=[ 176], 20.00th=[ 184], 00:10:01.349 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:10:01.349 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 241], 00:10:01.349 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 474], 99.95th=[ 478], 00:10:01.349 | 99.99th=[ 857] 00:10:01.349 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.349 lat (usec) : 250=63.31%, 500=36.49%, 750=0.05%, 1000=0.07% 00:10:01.349 lat (msec) : 2=0.05%, 4=0.02% 00:10:01.349 cpu : usr=1.50%, sys=5.00%, ctx=4056, majf=0, minf=15 00:10:01.349 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 issued rwts: total=2008,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.349 job2: (groupid=0, jobs=1): err= 0: pid=67633: Wed Dec 11 13:50:53 2024 00:10:01.349 read: IOPS=1942, BW=7768KiB/s (7955kB/s)(7776KiB/1001msec) 00:10:01.349 slat (nsec): min=11925, max=41226, avg=14146.40, stdev=3174.27 00:10:01.349 clat (usec): min=155, max=6248, avg=272.75, stdev=198.58 00:10:01.349 lat (usec): min=168, max=6261, avg=286.90, stdev=198.78 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 169], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:10:01.349 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 265], 00:10:01.349 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 326], 00:10:01.349 | 99.00th=[ 367], 99.50th=[ 461], 99.90th=[ 4752], 99.95th=[ 6259], 00:10:01.349 | 99.99th=[ 6259] 00:10:01.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.349 slat (usec): min=16, max=193, avg=21.23, stdev= 8.01 00:10:01.349 clat (usec): min=101, max=545, avg=191.32, stdev=24.72 00:10:01.349 lat (usec): min=123, max=566, avg=212.55, stdev=28.24 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 125], 5.00th=[ 139], 10.00th=[ 174], 20.00th=[ 182], 00:10:01.349 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:10:01.349 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 233], 00:10:01.349 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 343], 00:10:01.349 | 99.99th=[ 545] 00:10:01.349 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.349 lat (usec) : 250=64.85%, 500=34.94%, 750=0.08% 00:10:01.349 lat (msec) : 2=0.03%, 4=0.05%, 10=0.05% 00:10:01.349 cpu : usr=1.60%, sys=5.50%, ctx=3992, majf=0, minf=13 00:10:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 issued rwts: total=1944,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.349 job3: (groupid=0, jobs=1): err= 0: pid=67634: Wed Dec 11 13:50:53 2024 00:10:01.349 read: IOPS=1853, BW=7413KiB/s (7590kB/s)(7420KiB/1001msec) 00:10:01.349 slat (nsec): min=12297, max=57764, avg=14750.27, stdev=2964.91 00:10:01.349 clat (usec): min=195, max=2228, avg=272.13, stdev=61.03 00:10:01.349 lat (usec): min=208, max=2241, avg=286.88, stdev=61.40 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:10:01.349 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:10:01.349 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 322], 95.00th=[ 351], 00:10:01.349 | 99.00th=[ 445], 99.50th=[ 486], 99.90th=[ 799], 99.95th=[ 2212], 00:10:01.349 | 99.99th=[ 2212] 00:10:01.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.349 slat (nsec): min=18402, max=80939, avg=23276.57, stdev=5649.64 00:10:01.349 clat (usec): min=117, max=2082, avg=201.55, 
stdev=71.39 00:10:01.349 lat (usec): min=138, max=2120, avg=224.82, stdev=73.94 00:10:01.349 clat percentiles (usec): 00:10:01.349 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 137], 20.00th=[ 176], 00:10:01.349 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:10:01.349 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 293], 95.00th=[ 326], 00:10:01.349 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 832], 99.95th=[ 996], 00:10:01.349 | 99.99th=[ 2089] 00:10:01.349 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.349 lat (usec) : 250=58.37%, 500=41.35%, 750=0.15%, 1000=0.08% 00:10:01.349 lat (msec) : 4=0.05% 00:10:01.349 cpu : usr=1.90%, sys=5.60%, ctx=3911, majf=0, minf=9 00:10:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.349 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.349 00:10:01.349 Run status group 0 (all jobs): 00:10:01.349 READ: bw=30.3MiB/s (31.8MB/s), 7413KiB/s-8024KiB/s (7590kB/s-8217kB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:10:01.349 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:01.349 00:10:01.349 Disk stats (read/write): 00:10:01.349 nvme0n1: ios=1596/2048, merge=0/0, ticks=464/391, in_queue=855, util=89.08% 00:10:01.349 nvme0n2: ios=1639/2048, merge=0/0, ticks=441/409, in_queue=850, util=89.86% 00:10:01.349 nvme0n3: ios=1536/1944, merge=0/0, ticks=423/377, in_queue=800, util=88.90% 00:10:01.350 nvme0n4: ios=1536/1824, merge=0/0, ticks=427/380, in_queue=807, util=89.67% 00:10:01.350 13:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:01.350 [global] 00:10:01.350 thread=1 00:10:01.350 invalidate=1 00:10:01.350 rw=write 00:10:01.350 time_based=1 00:10:01.350 runtime=1 00:10:01.350 ioengine=libaio 00:10:01.350 direct=1 00:10:01.350 bs=4096 00:10:01.350 iodepth=128 00:10:01.350 norandommap=0 00:10:01.350 numjobs=1 00:10:01.350 00:10:01.350 verify_dump=1 00:10:01.350 verify_backlog=512 00:10:01.350 verify_state_save=0 00:10:01.350 do_verify=1 00:10:01.350 verify=crc32c-intel 00:10:01.350 [job0] 00:10:01.350 filename=/dev/nvme0n1 00:10:01.350 [job1] 00:10:01.350 filename=/dev/nvme0n2 00:10:01.350 [job2] 00:10:01.350 filename=/dev/nvme0n3 00:10:01.350 [job3] 00:10:01.350 filename=/dev/nvme0n4 00:10:01.350 Could not set queue depth (nvme0n1) 00:10:01.350 Could not set queue depth (nvme0n2) 00:10:01.350 Could not set queue depth (nvme0n3) 00:10:01.350 Could not set queue depth (nvme0n4) 00:10:01.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.350 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.350 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.350 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.350 fio-3.35 00:10:01.350 Starting 4 threads 00:10:02.724 00:10:02.724 job0: (groupid=0, 
jobs=1): err= 0: pid=67693: Wed Dec 11 13:50:55 2024 00:10:02.724 read: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(20.5MiB/1002msec) 00:10:02.724 slat (usec): min=4, max=3147, avg=90.10, stdev=427.66 00:10:02.724 clat (usec): min=318, max=13560, avg=11917.29, stdev=1039.14 00:10:02.724 lat (usec): min=2981, max=13580, avg=12007.39, stdev=948.39 00:10:02.724 clat percentiles (usec): 00:10:02.724 | 1.00th=[ 6390], 5.00th=[11207], 10.00th=[11469], 20.00th=[11731], 00:10:02.724 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:02.724 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12649], 95.00th=[12911], 00:10:02.724 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13435], 99.95th=[13566], 00:10:02.724 | 99.99th=[13566] 00:10:02.724 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:02.724 slat (usec): min=10, max=3015, avg=86.74, stdev=370.87 00:10:02.724 clat (usec): min=8551, max=12477, avg=11355.73, stdev=487.09 00:10:02.724 lat (usec): min=9567, max=12665, avg=11442.47, stdev=316.69 00:10:02.724 clat percentiles (usec): 00:10:02.724 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:10:02.724 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:10:02.724 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:10:02.724 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:10:02.724 | 99.99th=[12518] 00:10:02.724 bw ( KiB/s): min=22491, max=22491, per=34.99%, avg=22491.00, stdev= 0.00, samples=1 00:10:02.724 iops : min= 5622, max= 5622, avg=5622.00, stdev= 0.00, samples=1 00:10:02.724 lat (usec) : 500=0.01% 00:10:02.724 lat (msec) : 4=0.29%, 10=3.41%, 20=96.29% 00:10:02.724 cpu : usr=4.60%, sys=13.89%, ctx=369, majf=0, minf=5 00:10:02.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:02.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.724 issued rwts: total=5249,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.724 job1: (groupid=0, jobs=1): err= 0: pid=67694: Wed Dec 11 13:50:55 2024 00:10:02.724 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:10:02.724 slat (usec): min=6, max=9264, avg=200.67, stdev=924.66 00:10:02.724 clat (usec): min=13706, max=56256, avg=25765.55, stdev=9355.72 00:10:02.724 lat (usec): min=13723, max=57749, avg=25966.21, stdev=9443.73 00:10:02.724 clat percentiles (usec): 00:10:02.724 | 1.00th=[14615], 5.00th=[16319], 10.00th=[16450], 20.00th=[17171], 00:10:02.724 | 30.00th=[22414], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:02.724 | 70.00th=[25035], 80.00th=[27919], 90.00th=[42730], 95.00th=[48497], 00:10:02.724 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:10:02.724 | 99.99th=[56361] 00:10:02.724 write: IOPS=2256, BW=9026KiB/s (9243kB/s)(9044KiB/1002msec); 0 zone resets 00:10:02.724 slat (usec): min=12, max=14145, avg=251.67, stdev=1028.08 00:10:02.724 clat (usec): min=1086, max=78448, avg=32097.47, stdev=19696.40 00:10:02.724 lat (usec): min=4619, max=78471, avg=32349.14, stdev=19822.87 00:10:02.724 clat percentiles (usec): 00:10:02.724 | 1.00th=[ 5080], 5.00th=[12649], 10.00th=[12911], 20.00th=[13566], 00:10:02.724 | 30.00th=[16712], 40.00th=[21890], 50.00th=[26084], 60.00th=[29754], 00:10:02.724 | 70.00th=[43779], 80.00th=[49021], 90.00th=[62129], 95.00th=[76022], 
00:10:02.724 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:10:02.724 | 99.99th=[78119] 00:10:02.724 bw ( KiB/s): min= 8175, max= 8175, per=12.72%, avg=8175.00, stdev= 0.00, samples=1 00:10:02.724 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:02.724 lat (msec) : 2=0.02%, 10=1.95%, 20=30.89%, 50=55.72%, 100=11.42% 00:10:02.724 cpu : usr=2.50%, sys=6.89%, ctx=253, majf=0, minf=16 00:10:02.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:10:02.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.724 issued rwts: total=2048,2261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.724 job2: (groupid=0, jobs=1): err= 0: pid=67695: Wed Dec 11 13:50:55 2024 00:10:02.724 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:02.724 slat (usec): min=5, max=3251, avg=100.92, stdev=481.44 00:10:02.724 clat (usec): min=9952, max=14636, avg=13485.29, stdev=634.22 00:10:02.724 lat (usec): min=12044, max=14663, avg=13586.20, stdev=418.43 00:10:02.724 clat percentiles (usec): 00:10:02.724 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:02.724 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:10:02.724 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14222], 00:10:02.724 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:10:02.724 | 99.99th=[14615] 00:10:02.724 write: IOPS=5078, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1002msec); 0 zone resets 00:10:02.724 slat (usec): min=7, max=3161, avg=98.08, stdev=421.37 00:10:02.724 clat (usec): min=314, max=13923, avg=12671.72, stdev=1130.43 00:10:02.724 lat (usec): min=2726, max=14251, avg=12769.79, stdev=1049.83 00:10:02.724 clat percentiles (usec): 00:10:02.725 | 1.00th=[ 6456], 5.00th=[11731], 10.00th=[12125], 20.00th=[12387], 00:10:02.725 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[12911], 00:10:02.725 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:10:02.725 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:10:02.725 | 99.99th=[13960] 00:10:02.725 bw ( KiB/s): min=20439, max=20439, per=31.80%, avg=20439.00, stdev= 0.00, samples=1 00:10:02.725 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:10:02.725 lat (usec) : 500=0.01% 00:10:02.725 lat (msec) : 4=0.33%, 10=0.82%, 20=98.83% 00:10:02.725 cpu : usr=3.70%, sys=14.09%, ctx=306, majf=0, minf=11 00:10:02.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:02.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.725 issued rwts: total=4608,5089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.725 job3: (groupid=0, jobs=1): err= 0: pid=67696: Wed Dec 11 13:50:55 2024 00:10:02.725 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:02.725 slat (usec): min=6, max=8812, avg=164.58, stdev=786.98 00:10:02.725 clat (usec): min=10765, max=53793, avg=20729.95, stdev=5404.07 00:10:02.725 lat (usec): min=11090, max=53818, avg=20894.53, stdev=5458.53 00:10:02.725 clat percentiles (usec): 00:10:02.725 | 1.00th=[12780], 5.00th=[14615], 10.00th=[15795], 20.00th=[17171], 
00:10:02.725 | 30.00th=[17171], 40.00th=[17433], 50.00th=[18482], 60.00th=[21627], 00:10:02.725 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[29492], 00:10:02.725 | 99.00th=[39060], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:10:02.725 | 99.99th=[53740] 00:10:02.725 write: IOPS=3151, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1005msec); 0 zone resets 00:10:02.725 slat (usec): min=12, max=8605, avg=148.12, stdev=773.60 00:10:02.725 clat (usec): min=4587, max=73296, avg=20023.92, stdev=11100.30 00:10:02.725 lat (usec): min=8167, max=73319, avg=20172.04, stdev=11180.97 00:10:02.725 clat percentiles (usec): 00:10:02.725 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13304], 20.00th=[13829], 00:10:02.725 | 30.00th=[14353], 40.00th=[15401], 50.00th=[15926], 60.00th=[16909], 00:10:02.725 | 70.00th=[19006], 80.00th=[20579], 90.00th=[33424], 95.00th=[46400], 00:10:02.725 | 99.00th=[64226], 99.50th=[67634], 99.90th=[71828], 99.95th=[72877], 00:10:02.725 | 99.99th=[72877] 00:10:02.725 bw ( KiB/s): min= 8840, max=15704, per=19.09%, avg=12272.00, stdev=4853.58, samples=2 00:10:02.725 iops : min= 2210, max= 3926, avg=3068.00, stdev=1213.40, samples=2 00:10:02.725 lat (msec) : 10=0.21%, 20=67.30%, 50=30.21%, 100=2.28% 00:10:02.725 cpu : usr=3.19%, sys=8.37%, ctx=242, majf=0, minf=9 00:10:02.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:02.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.725 issued rwts: total=3072,3167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.725 00:10:02.725 Run status group 0 (all jobs): 00:10:02.725 READ: bw=58.2MiB/s (61.0MB/s), 8176KiB/s-20.5MiB/s (8372kB/s-21.5MB/s), io=58.5MiB (61.3MB), run=1002-1005msec 00:10:02.725 WRITE: bw=62.8MiB/s (65.8MB/s), 9026KiB/s-22.0MiB/s (9243kB/s-23.0MB/s), io=63.1MiB (66.1MB), run=1002-1005msec 00:10:02.725 00:10:02.725 Disk stats (read/write): 00:10:02.725 nvme0n1: ios=4658/4896, merge=0/0, ticks=12396/11978, in_queue=24374, util=89.47% 00:10:02.725 nvme0n2: ios=1585/1962, merge=0/0, ticks=12890/22226, in_queue=35116, util=89.29% 00:10:02.725 nvme0n3: ios=4113/4352, merge=0/0, ticks=12630/12025, in_queue=24655, util=89.96% 00:10:02.725 nvme0n4: ios=2560/3047, merge=0/0, ticks=24711/25579, in_queue=50290, util=89.51% 00:10:02.725 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:02.725 [global] 00:10:02.725 thread=1 00:10:02.725 invalidate=1 00:10:02.725 rw=randwrite 00:10:02.725 time_based=1 00:10:02.725 runtime=1 00:10:02.725 ioengine=libaio 00:10:02.725 direct=1 00:10:02.725 bs=4096 00:10:02.725 iodepth=128 00:10:02.725 norandommap=0 00:10:02.725 numjobs=1 00:10:02.725 00:10:02.725 verify_dump=1 00:10:02.725 verify_backlog=512 00:10:02.725 verify_state_save=0 00:10:02.725 do_verify=1 00:10:02.725 verify=crc32c-intel 00:10:02.725 [job0] 00:10:02.725 filename=/dev/nvme0n1 00:10:02.725 [job1] 00:10:02.725 filename=/dev/nvme0n2 00:10:02.725 [job2] 00:10:02.725 filename=/dev/nvme0n3 00:10:02.725 [job3] 00:10:02.725 filename=/dev/nvme0n4 00:10:02.725 Could not set queue depth (nvme0n1) 00:10:02.725 Could not set queue depth (nvme0n2) 00:10:02.725 Could not set queue depth (nvme0n3) 00:10:02.725 Could not set queue depth (nvme0n4) 00:10:02.725 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.725 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.725 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.725 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.725 fio-3.35 00:10:02.725 Starting 4 threads 00:10:04.097 00:10:04.097 job0: (groupid=0, jobs=1): err= 0: pid=67751: Wed Dec 11 13:50:56 2024 00:10:04.097 read: IOPS=5556, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1004msec) 00:10:04.098 slat (usec): min=5, max=5726, avg=84.68, stdev=525.19 00:10:04.098 clat (usec): min=1646, max=18686, avg=11826.07, stdev=1412.81 00:10:04.098 lat (usec): min=5432, max=22483, avg=11910.76, stdev=1434.52 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 6259], 5.00th=[ 8717], 10.00th=[11207], 20.00th=[11469], 00:10:04.098 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:04.098 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[13042], 00:10:04.098 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:10:04.098 | 99.99th=[18744] 00:10:04.098 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:04.098 slat (usec): min=10, max=7297, avg=86.42, stdev=503.94 00:10:04.098 clat (usec): min=5866, max=14470, avg=10864.57, stdev=921.25 00:10:04.098 lat (usec): min=7828, max=14543, avg=10950.99, stdev=801.99 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:10:04.098 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:10:04.098 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:10:04.098 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:10:04.098 | 99.99th=[14484] 00:10:04.098 bw ( KiB/s): min=21636, max=23376, per=35.41%, avg=22506.00, stdev=1230.37, samples=2 00:10:04.098 iops : min= 5409, max= 5844, avg=5626.50, stdev=307.59, samples=2 00:10:04.098 lat (msec) : 2=0.01%, 10=7.06%, 20=92.94% 00:10:04.098 cpu : usr=4.99%, sys=13.86%, ctx=240, majf=0, minf=1 00:10:04.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:04.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.098 issued rwts: total=5579,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.098 job1: (groupid=0, jobs=1): err= 0: pid=67752: Wed Dec 11 13:50:56 2024 00:10:04.098 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:10:04.098 slat (usec): min=6, max=29317, avg=243.24, stdev=1748.09 00:10:04.098 clat (usec): min=11086, max=74476, avg=32157.05, stdev=11630.95 00:10:04.098 lat (usec): min=13960, max=74512, avg=32400.30, stdev=11705.93 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[15139], 5.00th=[17957], 10.00th=[23725], 20.00th=[24511], 00:10:04.098 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[28181], 00:10:04.098 | 70.00th=[38011], 80.00th=[47449], 90.00th=[48497], 95.00th=[49546], 00:10:04.098 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[71828], 00:10:04.098 | 99.99th=[74974] 00:10:04.098 write: IOPS=2410, BW=9643KiB/s (9874kB/s)(9720KiB/1008msec); 0 zone 
resets 00:10:04.098 slat (usec): min=6, max=20808, avg=202.05, stdev=1169.12 00:10:04.098 clat (usec): min=797, max=88562, avg=25487.98, stdev=19417.64 00:10:04.098 lat (usec): min=8148, max=88573, avg=25690.03, stdev=19521.66 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[10945], 20.00th=[11994], 00:10:04.098 | 30.00th=[13566], 40.00th=[15401], 50.00th=[17957], 60.00th=[21103], 00:10:04.098 | 70.00th=[23462], 80.00th=[34341], 90.00th=[63177], 95.00th=[70779], 00:10:04.098 | 99.00th=[83362], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:10:04.098 | 99.99th=[88605] 00:10:04.098 bw ( KiB/s): min= 8192, max=10203, per=14.47%, avg=9197.50, stdev=1421.99, samples=2 00:10:04.098 iops : min= 2048, max= 2550, avg=2299.00, stdev=354.97, samples=2 00:10:04.098 lat (usec) : 1000=0.02% 00:10:04.098 lat (msec) : 10=1.85%, 20=32.38%, 50=55.36%, 100=10.38% 00:10:04.098 cpu : usr=2.18%, sys=5.56%, ctx=146, majf=0, minf=9 00:10:04.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:04.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.098 issued rwts: total=2048,2430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.098 job2: (groupid=0, jobs=1): err= 0: pid=67753: Wed Dec 11 13:50:56 2024 00:10:04.098 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:04.098 slat (usec): min=5, max=3831, avg=102.30, stdev=489.97 00:10:04.098 clat (usec): min=10136, max=14905, avg=13599.97, stdev=585.63 00:10:04.098 lat (usec): min=12976, max=14939, avg=13702.28, stdev=334.08 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[10814], 5.00th=[13042], 10.00th=[13304], 20.00th=[13435], 00:10:04.098 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:10:04.098 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14353], 00:10:04.098 | 99.00th=[14615], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:10:04.098 | 99.99th=[14877] 00:10:04.098 write: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec); 0 zone resets 00:10:04.098 slat (usec): min=10, max=3183, avg=100.76, stdev=440.22 00:10:04.098 clat (usec): min=273, max=14581, avg=13065.38, stdev=1182.92 00:10:04.098 lat (usec): min=2839, max=14863, avg=13166.14, stdev=1096.38 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 6456], 5.00th=[11600], 10.00th=[12649], 20.00th=[12911], 00:10:04.098 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:10:04.098 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091], 00:10:04.098 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:10:04.098 | 99.99th=[14615] 00:10:04.098 bw ( KiB/s): min=17876, max=20240, per=29.99%, avg=19058.00, stdev=1671.60, samples=2 00:10:04.098 iops : min= 4469, max= 5060, avg=4764.50, stdev=417.90, samples=2 00:10:04.098 lat (usec) : 500=0.01% 00:10:04.098 lat (msec) : 4=0.34%, 10=0.65%, 20=99.00% 00:10:04.098 cpu : usr=3.59%, sys=13.67%, ctx=317, majf=0, minf=3 00:10:04.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:04.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.098 issued rwts: total=4608,4897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.098 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:04.098 job3: (groupid=0, jobs=1): err= 0: pid=67754: Wed Dec 11 13:50:56 2024 00:10:04.098 read: IOPS=2992, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1009msec) 00:10:04.098 slat (usec): min=8, max=20283, avg=189.20, stdev=1392.57 00:10:04.098 clat (usec): min=1757, max=65878, avg=25830.78, stdev=8628.59 00:10:04.098 lat (usec): min=9133, max=65915, avg=26019.98, stdev=8717.50 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 9765], 5.00th=[16712], 10.00th=[17957], 20.00th=[18220], 00:10:04.098 | 30.00th=[18744], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:10:04.098 | 70.00th=[25822], 80.00th=[34341], 90.00th=[40633], 95.00th=[42730], 00:10:04.098 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[64750], 00:10:04.098 | 99.99th=[65799] 00:10:04.098 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:10:04.098 slat (usec): min=7, max=20962, avg=133.68, stdev=909.44 00:10:04.098 clat (usec): min=7291, max=35010, avg=16248.07, stdev=4618.57 00:10:04.098 lat (usec): min=9587, max=35036, avg=16381.75, stdev=4570.76 00:10:04.098 clat percentiles (usec): 00:10:04.098 | 1.00th=[ 9503], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:10:04.098 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14222], 60.00th=[14746], 00:10:04.098 | 70.00th=[17433], 80.00th=[21627], 90.00th=[22938], 95.00th=[23462], 00:10:04.098 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:10:04.098 | 99.99th=[34866] 00:10:04.098 bw ( KiB/s): min=10738, max=13816, per=19.32%, avg=12277.00, stdev=2176.47, samples=2 00:10:04.098 iops : min= 2684, max= 3454, avg=3069.00, stdev=544.47, samples=2 00:10:04.098 lat (msec) : 2=0.02%, 10=1.41%, 20=55.16%, 50=43.38%, 100=0.03% 00:10:04.098 cpu : usr=2.78%, sys=8.53%, ctx=123, majf=0, minf=4 00:10:04.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:04.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.098 issued rwts: total=3019,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.098 00:10:04.098 Run status group 0 (all jobs): 00:10:04.098 READ: bw=59.1MiB/s (61.9MB/s), 8127KiB/s-21.7MiB/s (8322kB/s-22.8MB/s), io=59.6MiB (62.5MB), run=1003-1009msec 00:10:04.098 WRITE: bw=62.1MiB/s (65.1MB/s), 9643KiB/s-21.9MiB/s (9874kB/s-23.0MB/s), io=62.6MiB (65.7MB), run=1003-1009msec 00:10:04.098 00:10:04.098 Disk stats (read/write): 00:10:04.098 nvme0n1: ios=4657/4864, merge=0/0, ticks=51756/48655, in_queue=100411, util=87.49% 00:10:04.098 nvme0n2: ios=1574/2048, merge=0/0, ticks=47650/55067, in_queue=102717, util=88.01% 00:10:04.098 nvme0n3: ios=4000/4096, merge=0/0, ticks=12245/11572, in_queue=23817, util=89.02% 00:10:04.098 nvme0n4: ios=2552/2632, merge=0/0, ticks=63016/40285, in_queue=103301, util=89.67% 00:10:04.098 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:04.098 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67767 00:10:04.098 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:04.098 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:04.098 [global] 00:10:04.098 thread=1 00:10:04.098 invalidate=1 00:10:04.098 rw=read 
00:10:04.098 time_based=1 00:10:04.098 runtime=10 00:10:04.098 ioengine=libaio 00:10:04.098 direct=1 00:10:04.098 bs=4096 00:10:04.098 iodepth=1 00:10:04.098 norandommap=1 00:10:04.098 numjobs=1 00:10:04.098 00:10:04.098 [job0] 00:10:04.098 filename=/dev/nvme0n1 00:10:04.098 [job1] 00:10:04.098 filename=/dev/nvme0n2 00:10:04.098 [job2] 00:10:04.098 filename=/dev/nvme0n3 00:10:04.098 [job3] 00:10:04.098 filename=/dev/nvme0n4 00:10:04.098 Could not set queue depth (nvme0n1) 00:10:04.098 Could not set queue depth (nvme0n2) 00:10:04.098 Could not set queue depth (nvme0n3) 00:10:04.098 Could not set queue depth (nvme0n4) 00:10:04.098 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.098 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.098 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.098 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.099 fio-3.35 00:10:04.099 Starting 4 threads 00:10:07.408 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:07.409 fio: pid=67815, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.409 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49737728, buflen=4096 00:10:07.409 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:07.409 fio: pid=67814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.409 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=65966080, buflen=4096 00:10:07.409 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.409 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:07.666 fio: pid=67812, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.666 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59662336, buflen=4096 00:10:07.666 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.666 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:07.924 fio: pid=67813, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.924 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14512128, buflen=4096 00:10:07.924 00:10:07.924 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67812: Wed Dec 11 13:51:00 2024 00:10:07.924 read: IOPS=4228, BW=16.5MiB/s (17.3MB/s)(56.9MiB/3445msec) 00:10:07.924 slat (usec): min=7, max=8932, avg=13.91, stdev=141.71 00:10:07.924 clat (usec): min=132, max=3005, avg=221.38, stdev=55.78 00:10:07.924 lat (usec): min=143, max=9117, avg=235.29, stdev=152.54 00:10:07.924 clat percentiles (usec): 00:10:07.924 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:10:07.924 | 30.00th=[ 178], 40.00th=[ 219], 
50.00th=[ 239], 60.00th=[ 247], 00:10:07.924 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:10:07.924 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 445], 99.95th=[ 603], 00:10:07.924 | 99.99th=[ 2409] 00:10:07.924 bw ( KiB/s): min=14658, max=21920, per=25.32%, avg=17047.00, stdev=3440.51, samples=6 00:10:07.924 iops : min= 3664, max= 5480, avg=4261.67, stdev=860.20, samples=6 00:10:07.924 lat (usec) : 250=63.38%, 500=36.53%, 750=0.05%, 1000=0.01% 00:10:07.924 lat (msec) : 2=0.01%, 4=0.01% 00:10:07.924 cpu : usr=1.19%, sys=4.65%, ctx=14586, majf=0, minf=1 00:10:07.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.924 issued rwts: total=14567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.924 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67813: Wed Dec 11 13:51:00 2024 00:10:07.924 read: IOPS=5345, BW=20.9MiB/s (21.9MB/s)(77.8MiB/3728msec) 00:10:07.924 slat (usec): min=7, max=12005, avg=15.05, stdev=151.49 00:10:07.924 clat (usec): min=3, max=2336, avg=170.67, stdev=40.08 00:10:07.924 lat (usec): min=143, max=12254, avg=185.72, stdev=157.55 00:10:07.924 clat percentiles (usec): 00:10:07.924 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:10:07.924 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:07.925 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 225], 00:10:07.925 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 545], 99.95th=[ 799], 00:10:07.925 | 99.99th=[ 2245] 00:10:07.925 bw ( KiB/s): min=17662, max=22696, per=31.88%, avg=21460.86, stdev=1782.62, samples=7 00:10:07.925 iops : min= 4415, max= 5674, avg=5365.14, stdev=445.83, samples=7 00:10:07.925 lat (usec) : 4=0.02%, 250=99.00%, 500=0.86%, 750=0.06%, 1000=0.02% 00:10:07.925 lat (msec) : 2=0.03%, 4=0.01% 00:10:07.925 cpu : usr=1.56%, sys=6.06%, ctx=19948, majf=0, minf=2 00:10:07.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 issued rwts: total=19928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.925 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67814: Wed Dec 11 13:51:00 2024 00:10:07.925 read: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(62.9MiB/3203msec) 00:10:07.925 slat (usec): min=10, max=8087, avg=14.38, stdev=88.57 00:10:07.925 clat (usec): min=130, max=3843, avg=183.10, stdev=37.50 00:10:07.925 lat (usec): min=162, max=8270, avg=197.49, stdev=96.37 00:10:07.925 clat percentiles (usec): 00:10:07.925 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:10:07.925 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:07.925 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:10:07.925 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 433], 99.95th=[ 570], 00:10:07.925 | 99.99th=[ 1860] 00:10:07.925 bw ( KiB/s): min=19457, max=20440, per=30.00%, avg=20193.50, stdev=386.39, samples=6 00:10:07.925 iops : min= 4864, max= 5110, avg=5048.33, stdev=96.69, samples=6 
00:10:07.925 lat (usec) : 250=99.78%, 500=0.15%, 750=0.04%, 1000=0.01% 00:10:07.925 lat (msec) : 2=0.01%, 4=0.01% 00:10:07.925 cpu : usr=1.47%, sys=6.15%, ctx=16110, majf=0, minf=2 00:10:07.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 issued rwts: total=16106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.925 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67815: Wed Dec 11 13:51:00 2024 00:10:07.925 read: IOPS=4113, BW=16.1MiB/s (16.8MB/s)(47.4MiB/2952msec) 00:10:07.925 slat (usec): min=7, max=635, avg=12.13, stdev= 6.98 00:10:07.925 clat (usec): min=3, max=6191, avg=229.59, stdev=80.33 00:10:07.925 lat (usec): min=158, max=6203, avg=241.73, stdev=79.38 00:10:07.925 clat percentiles (usec): 00:10:07.925 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:10:07.925 | 30.00th=[ 188], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 253], 00:10:07.925 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 277], 00:10:07.925 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 469], 99.95th=[ 1172], 00:10:07.925 | 99.99th=[ 3130] 00:10:07.925 bw ( KiB/s): min=14864, max=20448, per=24.96%, avg=16800.00, stdev=2669.68, samples=5 00:10:07.925 iops : min= 3716, max= 5112, avg=4200.00, stdev=667.42, samples=5 00:10:07.925 lat (usec) : 4=0.01%, 250=55.55%, 500=44.33%, 750=0.04% 00:10:07.925 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:07.925 cpu : usr=1.19%, sys=4.68%, ctx=12149, majf=0, minf=2 00:10:07.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.925 issued rwts: total=12144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.925 00:10:07.925 Run status group 0 (all jobs): 00:10:07.925 READ: bw=65.7MiB/s (68.9MB/s), 16.1MiB/s-20.9MiB/s (16.8MB/s-21.9MB/s), io=245MiB (257MB), run=2952-3728msec 00:10:07.925 00:10:07.925 Disk stats (read/write): 00:10:07.925 nvme0n1: ios=14239/0, merge=0/0, ticks=3084/0, in_queue=3084, util=95.48% 00:10:07.925 nvme0n2: ios=19273/0, merge=0/0, ticks=3269/0, in_queue=3269, util=95.58% 00:10:07.925 nvme0n3: ios=15682/0, merge=0/0, ticks=2893/0, in_queue=2893, util=96.43% 00:10:07.925 nvme0n4: ios=11848/0, merge=0/0, ticks=2609/0, in_queue=2609, util=96.59% 00:10:07.925 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.925 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:08.182 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.182 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:08.745 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:10:08.745 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:09.002 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.002 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:09.259 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.259 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67767 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.517 nvmf hotplug test: fio failed as expected 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:09.517 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.775 rmmod nvme_tcp 00:10:09.775 rmmod nvme_fabrics 00:10:09.775 rmmod nvme_keyring 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 67393 ']' 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 67393 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 67393 ']' 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 67393 00:10:09.775 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67393 00:10:10.033 killing process with pid 67393 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67393' 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 67393 00:10:10.033 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 67393 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:10.033 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:10.291 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:10.291 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:10.292 00:10:10.292 real 0m19.654s 00:10:10.292 user 1m13.339s 00:10:10.292 sys 0m10.284s 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.292 ************************************ 00:10:10.292 END TEST nvmf_fio_target 00:10:10.292 ************************************ 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.292 13:51:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.550 ************************************ 00:10:10.550 START TEST nvmf_bdevio 00:10:10.550 ************************************ 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.550 * Looking for test storage... 
00:10:10.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.550 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.551 --rc genhtml_branch_coverage=1 00:10:10.551 --rc genhtml_function_coverage=1 00:10:10.551 --rc genhtml_legend=1 00:10:10.551 --rc geninfo_all_blocks=1 00:10:10.551 --rc geninfo_unexecuted_blocks=1 00:10:10.551 00:10:10.551 ' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.551 --rc genhtml_branch_coverage=1 00:10:10.551 --rc genhtml_function_coverage=1 00:10:10.551 --rc genhtml_legend=1 00:10:10.551 --rc geninfo_all_blocks=1 00:10:10.551 --rc geninfo_unexecuted_blocks=1 00:10:10.551 00:10:10.551 ' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.551 --rc genhtml_branch_coverage=1 00:10:10.551 --rc genhtml_function_coverage=1 00:10:10.551 --rc genhtml_legend=1 00:10:10.551 --rc geninfo_all_blocks=1 00:10:10.551 --rc geninfo_unexecuted_blocks=1 00:10:10.551 00:10:10.551 ' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.551 --rc genhtml_branch_coverage=1 00:10:10.551 --rc genhtml_function_coverage=1 00:10:10.551 --rc genhtml_legend=1 00:10:10.551 --rc geninfo_all_blocks=1 00:10:10.551 --rc geninfo_unexecuted_blocks=1 00:10:10.551 00:10:10.551 ' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
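The nvmftestinit output that follows rebuilds the veth/namespace topology used throughout this run: the initiator-side interfaces nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, the target-side interfaces nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and both sides are joined through the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of the commands traced below (first interface pair only; cleanup of stale devices and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # initiator -> target reachability check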
00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.551 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:10.552 Cannot find device "nvmf_init_br" 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:10.552 Cannot find device "nvmf_init_br2" 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:10.552 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:10.814 Cannot find device "nvmf_tgt_br" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.814 Cannot find device "nvmf_tgt_br2" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:10.814 Cannot find device "nvmf_init_br" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:10.814 Cannot find device "nvmf_init_br2" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:10.814 Cannot find device "nvmf_tgt_br" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.814 Cannot find device "nvmf_tgt_br2" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.814 Cannot find device "nvmf_br" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.814 Cannot find device "nvmf_init_if" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.814 Cannot find device "nvmf_init_if2" 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.814 
13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.814 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.104 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:11.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:11.105 00:10:11.105 --- 10.0.0.3 ping statistics --- 00:10:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.105 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:11.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:11.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:10:11.105 00:10:11.105 --- 10.0.0.4 ping statistics --- 00:10:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.105 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:11.105 00:10:11.105 --- 10.0.0.1 ping statistics --- 00:10:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.105 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:11.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:10:11.105 00:10:11.105 --- 10.0.0.2 ping statistics --- 00:10:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.105 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=68136 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 68136 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 68136 ']' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.105 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.105 [2024-12-11 13:51:04.028564] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:10:11.105 [2024-12-11 13:51:04.028634] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.363 [2024-12-11 13:51:04.178838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.363 [2024-12-11 13:51:04.244143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.363 [2024-12-11 13:51:04.244364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.363 [2024-12-11 13:51:04.244520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.363 [2024-12-11 13:51:04.244758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.363 [2024-12-11 13:51:04.244904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.363 [2024-12-11 13:51:04.246289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:11.363 [2024-12-11 13:51:04.246553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:11.363 [2024-12-11 13:51:04.246433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:11.363 [2024-12-11 13:51:04.246555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.363 [2024-12-11 13:51:04.303948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.363 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.363 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:11.363 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.363 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.363 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 [2024-12-11 13:51:04.417269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 Malloc0 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 [2024-12-11 13:51:04.496308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.621 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.622 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.622 { 00:10:11.622 "params": { 00:10:11.622 "name": "Nvme$subsystem", 00:10:11.622 "trtype": "$TEST_TRANSPORT", 00:10:11.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.622 "adrfam": "ipv4", 00:10:11.622 "trsvcid": "$NVMF_PORT", 00:10:11.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.622 "hdgst": ${hdgst:-false}, 00:10:11.622 "ddgst": ${ddgst:-false} 00:10:11.622 }, 00:10:11.622 "method": "bdev_nvme_attach_controller" 00:10:11.622 } 00:10:11.622 EOF 00:10:11.622 )") 00:10:11.622 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:11.622 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
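Taken together, the target-side setup that bdevio.sh has driven through rpc_cmd at this point is equivalent to the following rpc.py sequence (same script path used elsewhere in this run); the malloc bdev is 64 MiB with 512-byte blocks, matching the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON printed next by gen_nvmf_target_json is the initiator-side counterpart: it tells bdevio to attach a bdev_nvme controller to that listener (traddr 10.0.0.3, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).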
00:10:11.622 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:11.622 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.622 "params": { 00:10:11.622 "name": "Nvme1", 00:10:11.622 "trtype": "tcp", 00:10:11.622 "traddr": "10.0.0.3", 00:10:11.622 "adrfam": "ipv4", 00:10:11.622 "trsvcid": "4420", 00:10:11.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.622 "hdgst": false, 00:10:11.622 "ddgst": false 00:10:11.622 }, 00:10:11.622 "method": "bdev_nvme_attach_controller" 00:10:11.622 }' 00:10:11.622 [2024-12-11 13:51:04.557151] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:10:11.622 [2024-12-11 13:51:04.557235] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68169 ] 00:10:11.880 [2024-12-11 13:51:04.703445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.880 [2024-12-11 13:51:04.782006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.880 [2024-12-11 13:51:04.782129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.880 [2024-12-11 13:51:04.782144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.880 [2024-12-11 13:51:04.845579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.141 I/O targets: 00:10:12.141 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:12.141 00:10:12.141 00:10:12.141 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.141 http://cunit.sourceforge.net/ 00:10:12.141 00:10:12.141 00:10:12.141 Suite: bdevio tests on: Nvme1n1 00:10:12.141 Test: blockdev write read block ...passed 00:10:12.141 Test: blockdev write zeroes read block ...passed 00:10:12.141 Test: blockdev write zeroes read no split ...passed 00:10:12.141 Test: blockdev write zeroes read split ...passed 00:10:12.141 Test: blockdev write zeroes read split partial ...passed 00:10:12.141 Test: blockdev reset ...[2024-12-11 13:51:04.998739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:12.141 [2024-12-11 13:51:04.998991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f4b30 (9): Bad file descriptor 00:10:12.141 [2024-12-11 13:51:05.014685] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:12.141 passed 00:10:12.141 Test: blockdev write read 8 blocks ...passed 00:10:12.141 Test: blockdev write read size > 128k ...passed 00:10:12.141 Test: blockdev write read invalid size ...passed 00:10:12.141 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:12.141 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:12.141 Test: blockdev write read max offset ...passed 00:10:12.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:12.141 Test: blockdev writev readv 8 blocks ...passed 00:10:12.141 Test: blockdev writev readv 30 x 1block ...passed 00:10:12.141 Test: blockdev writev readv block ...passed 00:10:12.141 Test: blockdev writev readv size > 128k ...passed 00:10:12.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:12.141 Test: blockdev comparev and writev ...[2024-12-11 13:51:05.025871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.141 [2024-12-11 13:51:05.025917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:12.141 [2024-12-11 13:51:05.025940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.141 [2024-12-11 13:51:05.025960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:12.141 [2024-12-11 13:51:05.026252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.141 [2024-12-11 13:51:05.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:12.141 [2024-12-11 13:51:05.026287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.141 [2024-12-11 13:51:05.026298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:12.141 [2024-12-11 13:51:05.026592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.142 [2024-12-11 13:51:05.026615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.142 [2024-12-11 13:51:05.026650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.027147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.142 [2024-12-11 13:51:05.027183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.027203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.142 [2024-12-11 13:51:05.027213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:12.142 passed 00:10:12.142 Test: blockdev nvme passthru rw ...passed 00:10:12.142 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:51:05.028312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.142 [2024-12-11 13:51:05.028350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.028564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.142 [2024-12-11 13:51:05.028584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.028695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.142 passed 00:10:12.142 Test: blockdev nvme admin passthru ...[2024-12-11 13:51:05.028731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:12.142 [2024-12-11 13:51:05.028843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.142 [2024-12-11 13:51:05.028860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:12.142 passed 00:10:12.142 Test: blockdev copy ...passed 00:10:12.142 00:10:12.142 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.142 suites 1 1 n/a 0 0 00:10:12.142 tests 23 23 23 0 0 00:10:12.142 asserts 152 152 152 0 n/a 00:10:12.142 00:10:12.142 Elapsed time = 0.145 seconds 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.399 rmmod nvme_tcp 00:10:12.399 rmmod nvme_fabrics 00:10:12.399 rmmod nvme_keyring 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
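The teardown that follows (delete the subsystem, unload the kernel NVMe fabrics modules, then kill and reap the target process, pid 68136 in this run) can be approximated outside the harness roughly as below; invoking rpc.py against the default /var/tmp/spdk.sock socket is an assumption, and the pid bookkeeping is what the killprocess helper in the next trace lines performs:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill -0 68136 2>/dev/null && kill 68136   # stop the nvmf_tgt started for this test
wait 68136 2>/dev/null || true            # reap it (works because nvmf_tgt is a child of the test shell)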
00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 68136 ']' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 68136 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 68136 ']' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 68136 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68136 00:10:12.399 killing process with pid 68136 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68136' 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 68136 00:10:12.399 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 68136 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:12.656 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:12.914 00:10:12.914 real 0m2.544s 00:10:12.914 user 0m6.768s 00:10:12.914 sys 0m0.823s 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.914 ************************************ 00:10:12.914 END TEST nvmf_bdevio 00:10:12.914 ************************************ 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:12.914 00:10:12.914 real 2m32.934s 00:10:12.914 user 6m39.451s 00:10:12.914 sys 0m52.030s 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.914 13:51:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.914 ************************************ 00:10:12.914 END TEST nvmf_target_core 00:10:12.914 ************************************ 00:10:13.172 13:51:05 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:13.172 13:51:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.172 13:51:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.172 13:51:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.172 ************************************ 00:10:13.172 START TEST nvmf_target_extra 00:10:13.172 ************************************ 00:10:13.172 13:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:13.172 * Looking for test storage... 
00:10:13.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.172 --rc genhtml_branch_coverage=1 00:10:13.172 --rc genhtml_function_coverage=1 00:10:13.172 --rc genhtml_legend=1 00:10:13.172 --rc geninfo_all_blocks=1 00:10:13.172 --rc geninfo_unexecuted_blocks=1 00:10:13.172 00:10:13.172 ' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.172 --rc genhtml_branch_coverage=1 00:10:13.172 --rc genhtml_function_coverage=1 00:10:13.172 --rc genhtml_legend=1 00:10:13.172 --rc geninfo_all_blocks=1 00:10:13.172 --rc geninfo_unexecuted_blocks=1 00:10:13.172 00:10:13.172 ' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.172 --rc genhtml_branch_coverage=1 00:10:13.172 --rc genhtml_function_coverage=1 00:10:13.172 --rc genhtml_legend=1 00:10:13.172 --rc geninfo_all_blocks=1 00:10:13.172 --rc geninfo_unexecuted_blocks=1 00:10:13.172 00:10:13.172 ' 00:10:13.172 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.172 --rc genhtml_branch_coverage=1 00:10:13.173 --rc genhtml_function_coverage=1 00:10:13.173 --rc genhtml_legend=1 00:10:13.173 --rc geninfo_all_blocks=1 00:10:13.173 --rc geninfo_unexecuted_blocks=1 00:10:13.173 00:10:13.173 ' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.173 13:51:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.173 ************************************ 00:10:13.173 START TEST nvmf_auth_target 00:10:13.173 ************************************ 00:10:13.173 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:13.436 * Looking for test storage... 
00:10:13.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.436 --rc genhtml_branch_coverage=1 00:10:13.436 --rc genhtml_function_coverage=1 00:10:13.436 --rc genhtml_legend=1 00:10:13.436 --rc geninfo_all_blocks=1 00:10:13.436 --rc geninfo_unexecuted_blocks=1 00:10:13.436 00:10:13.436 ' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.436 --rc genhtml_branch_coverage=1 00:10:13.436 --rc genhtml_function_coverage=1 00:10:13.436 --rc genhtml_legend=1 00:10:13.436 --rc geninfo_all_blocks=1 00:10:13.436 --rc geninfo_unexecuted_blocks=1 00:10:13.436 00:10:13.436 ' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.436 --rc genhtml_branch_coverage=1 00:10:13.436 --rc genhtml_function_coverage=1 00:10:13.436 --rc genhtml_legend=1 00:10:13.436 --rc geninfo_all_blocks=1 00:10:13.436 --rc geninfo_unexecuted_blocks=1 00:10:13.436 00:10:13.436 ' 00:10:13.436 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.436 --rc genhtml_branch_coverage=1 00:10:13.436 --rc genhtml_function_coverage=1 00:10:13.436 --rc genhtml_legend=1 00:10:13.437 --rc geninfo_all_blocks=1 00:10:13.437 --rc geninfo_unexecuted_blocks=1 00:10:13.437 00:10:13.437 ' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.437 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.438 
13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:13.438 Cannot find device "nvmf_init_br" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:13.438 Cannot find device "nvmf_init_br2" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:13.438 Cannot find device "nvmf_tgt_br" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.438 Cannot find device "nvmf_tgt_br2" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:13.438 Cannot find device "nvmf_init_br" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:13.438 Cannot find device "nvmf_init_br2" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:13.438 Cannot find device "nvmf_tgt_br" 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:13.438 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:13.438 Cannot find device "nvmf_tgt_br2" 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:13.700 Cannot find device "nvmf_br" 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:13.700 Cannot find device "nvmf_init_if" 00:10:13.700 13:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:13.700 Cannot find device "nvmf_init_if2" 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.700 13:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.700 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:13.701 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.701 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:13.701 00:10:13.701 --- 10.0.0.3 ping statistics --- 00:10:13.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.701 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:13.701 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:13.701 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:10:13.701 00:10:13.701 --- 10.0.0.4 ping statistics --- 00:10:13.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.701 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:13.701 00:10:13.701 --- 10.0.0.1 ping statistics --- 00:10:13.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.701 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:13.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:13.701 00:10:13.701 --- 10.0.0.2 ping statistics --- 00:10:13.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.701 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.701 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=68453 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 68453 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68453 ']' 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
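waitforlisten then blocks until the freshly started target (pid 68453) answers on /var/tmp/spdk.sock. A hypothetical re-implementation of that wait, not the helper's actual code, is essentially a poll loop over the RPC socket:

while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    # bail out if the target died before it ever started listening
    kill -0 68453 2>/dev/null || { echo 'nvmf_tgt (pid 68453) exited before listening' >&2; break; }
    sleep 0.5
done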
00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.959 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=68472 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5c4468ee25ace693dddbfb0bb5b888377f15a5ca8b1884df 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KgS 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5c4468ee25ace693dddbfb0bb5b888377f15a5ca8b1884df 0 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5c4468ee25ace693dddbfb0bb5b888377f15a5ca8b1884df 0 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5c4468ee25ace693dddbfb0bb5b888377f15a5ca8b1884df 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:14.219 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.478 13:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KgS 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KgS 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.KgS 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b19fa30d61f0db6b9641eab1ee5c54e1bf65cf587245c124967cdaeaa850da0 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jpo 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3b19fa30d61f0db6b9641eab1ee5c54e1bf65cf587245c124967cdaeaa850da0 3 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b19fa30d61f0db6b9641eab1ee5c54e1bf65cf587245c124967cdaeaa850da0 3 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b19fa30d61f0db6b9641eab1ee5c54e1bf65cf587245c124967cdaeaa850da0 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jpo 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jpo 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Jpo 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:14.478 13:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=49a8c1e299aa014aebf1578d3aee3169 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Py4 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 49a8c1e299aa014aebf1578d3aee3169 1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 49a8c1e299aa014aebf1578d3aee3169 1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=49a8c1e299aa014aebf1578d3aee3169 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Py4 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Py4 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Py4 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8a7c298a3389ea0e676ca81603cb58055ba562f4819e9050 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k8R 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8a7c298a3389ea0e676ca81603cb58055ba562f4819e9050 2 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8a7c298a3389ea0e676ca81603cb58055ba562f4819e9050 2 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8a7c298a3389ea0e676ca81603cb58055ba562f4819e9050 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k8R 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k8R 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k8R 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6c9d6f4effb8de2bb5bcf3f9af4c6d3b5a74949566272767 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xqV 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6c9d6f4effb8de2bb5bcf3f9af4c6d3b5a74949566272767 2 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6c9d6f4effb8de2bb5bcf3f9af4c6d3b5a74949566272767 2 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6c9d6f4effb8de2bb5bcf3f9af4c6d3b5a74949566272767 00:10:14.478 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:14.479 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xqV 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xqV 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xqV 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.737 13:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02411b0c745e7321160c00160476759e 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZW4 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02411b0c745e7321160c00160476759e 1 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02411b0c745e7321160c00160476759e 1 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02411b0c745e7321160c00160476759e 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZW4 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZW4 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZW4 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=970389e3792db477fedc359b271c9737f5079e64cf473402574afd19b70a9132 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yZW 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
970389e3792db477fedc359b271c9737f5079e64cf473402574afd19b70a9132 3 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 970389e3792db477fedc359b271c9737f5079e64cf473402574afd19b70a9132 3 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=970389e3792db477fedc359b271c9737f5079e64cf473402574afd19b70a9132 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yZW 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yZW 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.yZW 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 68453 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68453 ']' 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.737 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 68472 /var/tmp/host.sock 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68472 ']' 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
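The block just above is gen_dhchap_key producing the four target keys (keys[0..3]) and three controller keys (ckeys[0..2]): xxd reads len/2 random bytes from /dev/urandom as a hex string, mktemp names a /tmp/spdk.key-<digest>.XXX file, and the inline python wraps that string in a DHHC-1 secret that is later fed both to keyring_file_add_key and to nvme connect --dhchap-secret. Below is a sketch of that wrapping, assuming the usual DH-HMAC-CHAP secret representation: base64 over the key characters followed by a CRC-32, with the hash id as the second field (00 null, 01 sha256, 02 sha384, 03 sha512). The connect strings later in the trace base64-decode back to these hex strings, which is why the sketch encodes the characters rather than raw bytes; the function name is made up for illustration and the CRC byte order is an assumption, not something visible in the log.

format_dhchap_key_sketch() {
    local key=$1 hash_id=$2   # key: the generated hex string, used verbatim as the secret bytes
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                      # ASCII characters of the hex string
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed little-endian CRC-32 trailer
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$hash_id"
}

# e.g. keys[0] above: a 48-character key wrapped with the null (00) hash id
file=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key_sketch "$(xxd -p -c0 -l 24 /dev/urandom)" 0 > "$file"
chmod 0600 "$file"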
00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.996 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KgS 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KgS 00:10:15.563 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KgS 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Jpo ]] 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jpo 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jpo 00:10:15.822 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jpo 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Py4 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Py4 00:10:16.080 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Py4 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.k8R ]] 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k8R 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k8R 00:10:16.339 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k8R 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xqV 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xqV 00:10:16.612 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xqV 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZW4 ]] 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZW4 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZW4 00:10:16.871 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZW4 00:10:17.129 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:17.130 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yZW 00:10:17.130 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.130 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.130 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.130 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yZW 00:10:17.130 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yZW 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.388 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.647 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.214 00:10:18.214 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.214 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.214 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.473 { 00:10:18.473 "cntlid": 1, 00:10:18.473 "qid": 0, 00:10:18.473 "state": "enabled", 00:10:18.473 "thread": "nvmf_tgt_poll_group_000", 00:10:18.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:18.473 "listen_address": { 00:10:18.473 "trtype": "TCP", 00:10:18.473 "adrfam": "IPv4", 00:10:18.473 "traddr": "10.0.0.3", 00:10:18.473 "trsvcid": "4420" 00:10:18.473 }, 00:10:18.473 "peer_address": { 00:10:18.473 "trtype": "TCP", 00:10:18.473 "adrfam": "IPv4", 00:10:18.473 "traddr": "10.0.0.1", 00:10:18.473 "trsvcid": "38214" 00:10:18.473 }, 00:10:18.473 "auth": { 00:10:18.473 "state": "completed", 00:10:18.473 "digest": "sha256", 00:10:18.473 "dhgroup": "null" 00:10:18.473 } 00:10:18.473 } 00:10:18.473 ]' 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.473 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.039 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:19.039 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.228 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.487 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.487 13:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.745 00:10:24.004 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.004 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.004 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.262 { 00:10:24.262 "cntlid": 3, 00:10:24.262 "qid": 0, 00:10:24.262 "state": "enabled", 00:10:24.262 "thread": "nvmf_tgt_poll_group_000", 00:10:24.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:24.262 "listen_address": { 00:10:24.262 "trtype": "TCP", 00:10:24.262 "adrfam": "IPv4", 00:10:24.262 "traddr": "10.0.0.3", 00:10:24.262 "trsvcid": "4420" 00:10:24.262 }, 00:10:24.262 "peer_address": { 00:10:24.262 "trtype": "TCP", 00:10:24.262 "adrfam": "IPv4", 00:10:24.262 "traddr": "10.0.0.1", 00:10:24.262 "trsvcid": "35632" 00:10:24.262 }, 00:10:24.262 "auth": { 00:10:24.262 "state": "completed", 00:10:24.262 "digest": "sha256", 00:10:24.262 "dhgroup": "null" 00:10:24.262 } 00:10:24.262 } 00:10:24.262 ]' 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.262 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.263 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.521 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret 
DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:24.521 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.456 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.714 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:25.714 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.714 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:25.714 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:25.714 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.715 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.973 00:10:25.973 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.973 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.973 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.231 { 00:10:26.231 "cntlid": 5, 00:10:26.231 "qid": 0, 00:10:26.231 "state": "enabled", 00:10:26.231 "thread": "nvmf_tgt_poll_group_000", 00:10:26.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:26.231 "listen_address": { 00:10:26.231 "trtype": "TCP", 00:10:26.231 "adrfam": "IPv4", 00:10:26.231 "traddr": "10.0.0.3", 00:10:26.231 "trsvcid": "4420" 00:10:26.231 }, 00:10:26.231 "peer_address": { 00:10:26.231 "trtype": "TCP", 00:10:26.231 "adrfam": "IPv4", 00:10:26.231 "traddr": "10.0.0.1", 00:10:26.231 "trsvcid": "35654" 00:10:26.231 }, 00:10:26.231 "auth": { 00:10:26.231 "state": "completed", 00:10:26.231 "digest": "sha256", 00:10:26.231 "dhgroup": "null" 00:10:26.231 } 00:10:26.231 } 00:10:26.231 ]' 00:10:26.231 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.490 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.748 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:26.748 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:27.315 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.315 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:27.315 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.315 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.573 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:27.573 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.831 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.090 00:10:28.090 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.090 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.090 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.349 { 00:10:28.349 "cntlid": 7, 00:10:28.349 "qid": 0, 00:10:28.349 "state": "enabled", 00:10:28.349 "thread": "nvmf_tgt_poll_group_000", 00:10:28.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:28.349 "listen_address": { 00:10:28.349 "trtype": "TCP", 00:10:28.349 "adrfam": "IPv4", 00:10:28.349 "traddr": "10.0.0.3", 00:10:28.349 "trsvcid": "4420" 00:10:28.349 }, 00:10:28.349 "peer_address": { 00:10:28.349 "trtype": "TCP", 00:10:28.349 "adrfam": "IPv4", 00:10:28.349 "traddr": "10.0.0.1", 00:10:28.349 "trsvcid": "35694" 00:10:28.349 }, 00:10:28.349 "auth": { 00:10:28.349 "state": "completed", 00:10:28.349 "digest": "sha256", 00:10:28.349 "dhgroup": "null" 00:10:28.349 } 00:10:28.349 } 00:10:28.349 ]' 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.349 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.607 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:28.607 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.607 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.607 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.607 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.865 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:28.865 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:29.433 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.000 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.001 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.260 00:10:30.260 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.260 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.260 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.518 { 00:10:30.518 "cntlid": 9, 00:10:30.518 "qid": 0, 00:10:30.518 "state": "enabled", 00:10:30.518 "thread": "nvmf_tgt_poll_group_000", 00:10:30.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:30.518 "listen_address": { 00:10:30.518 "trtype": "TCP", 00:10:30.518 "adrfam": "IPv4", 00:10:30.518 "traddr": "10.0.0.3", 00:10:30.518 "trsvcid": "4420" 00:10:30.518 }, 00:10:30.518 "peer_address": { 00:10:30.518 "trtype": "TCP", 00:10:30.518 "adrfam": "IPv4", 00:10:30.518 "traddr": "10.0.0.1", 00:10:30.518 "trsvcid": "35712" 00:10:30.518 }, 00:10:30.518 "auth": { 00:10:30.518 "state": "completed", 00:10:30.518 "digest": "sha256", 00:10:30.518 "dhgroup": "ffdhe2048" 00:10:30.518 } 00:10:30.518 } 00:10:30.518 ]' 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.518 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.776 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.776 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.776 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.776 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.776 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.034 
13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:31.035 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:31.600 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:31.858 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.117 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.374 00:10:32.374 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.374 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.374 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.633 { 00:10:32.633 "cntlid": 11, 00:10:32.633 "qid": 0, 00:10:32.633 "state": "enabled", 00:10:32.633 "thread": "nvmf_tgt_poll_group_000", 00:10:32.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:32.633 "listen_address": { 00:10:32.633 "trtype": "TCP", 00:10:32.633 "adrfam": "IPv4", 00:10:32.633 "traddr": "10.0.0.3", 00:10:32.633 "trsvcid": "4420" 00:10:32.633 }, 00:10:32.633 "peer_address": { 00:10:32.633 "trtype": "TCP", 00:10:32.633 "adrfam": "IPv4", 00:10:32.633 "traddr": "10.0.0.1", 00:10:32.633 "trsvcid": "35746" 00:10:32.633 }, 00:10:32.633 "auth": { 00:10:32.633 "state": "completed", 00:10:32.633 "digest": "sha256", 00:10:32.633 "dhgroup": "ffdhe2048" 00:10:32.633 } 00:10:32.633 } 00:10:32.633 ]' 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.633 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.890 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:32.890 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.890 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.890 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.890 
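Each combination is also exercised through the kernel initiator (the nvme_connect / nvme disconnect records above). A minimal sketch using the literal addresses from the log; the DHHC-1 secrets are the base64 blobs printed above and are elided here as placeholders:

# Connect with in-band DH-HMAC-CHAP, then drop the controller again.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
    --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 \
    --dhchap-secret 'DHHC-1:..:<host secret from the log>' \
    --dhchap-ctrl-secret 'DHHC-1:..:<controller secret from the log>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0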
13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.148 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:33.148 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:33.715 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:33.973 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.974 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.974 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.974 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.974 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:33.974 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.974 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.974 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.540 00:10:34.540 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.540 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.540 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.797 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.797 { 00:10:34.797 "cntlid": 13, 00:10:34.797 "qid": 0, 00:10:34.797 "state": "enabled", 00:10:34.797 "thread": "nvmf_tgt_poll_group_000", 00:10:34.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:34.798 "listen_address": { 00:10:34.798 "trtype": "TCP", 00:10:34.798 "adrfam": "IPv4", 00:10:34.798 "traddr": "10.0.0.3", 00:10:34.798 "trsvcid": "4420" 00:10:34.798 }, 00:10:34.798 "peer_address": { 00:10:34.798 "trtype": "TCP", 00:10:34.798 "adrfam": "IPv4", 00:10:34.798 "traddr": "10.0.0.1", 00:10:34.798 "trsvcid": "48212" 00:10:34.798 }, 00:10:34.798 "auth": { 00:10:34.798 "state": "completed", 00:10:34.798 "digest": "sha256", 00:10:34.798 "dhgroup": "ffdhe2048" 00:10:34.798 } 00:10:34.798 } 00:10:34.798 ]' 00:10:34.798 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.798 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.798 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.798 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:34.798 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.056 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.056 13:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.056 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.314 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:35.314 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:35.880 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
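After each attach, the test reads the negotiated parameters back (the bdev_nvme_get_controllers / nvmf_subsystem_get_qpairs records and jq filters above). A compressed sketch of that check for the current sha256/ffdhe2048 iteration, assuming the same paths and the target's default RPC socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
# The host must have created controller "nvme0" ...
[[ $("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ... and the target-side qpair must report a completed DH-HMAC-CHAP negotiation
# with the digest / DH group configured for this iteration.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]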
00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.138 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.396 00:10:36.654 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.654 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.654 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.912 { 00:10:36.912 "cntlid": 15, 00:10:36.912 "qid": 0, 00:10:36.912 "state": "enabled", 00:10:36.912 "thread": "nvmf_tgt_poll_group_000", 00:10:36.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:36.912 "listen_address": { 00:10:36.912 "trtype": "TCP", 00:10:36.912 "adrfam": "IPv4", 00:10:36.912 "traddr": "10.0.0.3", 00:10:36.912 "trsvcid": "4420" 00:10:36.912 }, 00:10:36.912 "peer_address": { 00:10:36.912 "trtype": "TCP", 00:10:36.912 "adrfam": "IPv4", 00:10:36.912 "traddr": "10.0.0.1", 00:10:36.912 "trsvcid": "48246" 00:10:36.912 }, 00:10:36.912 "auth": { 00:10:36.912 "state": "completed", 00:10:36.912 "digest": "sha256", 00:10:36.912 "dhgroup": "ffdhe2048" 00:10:36.912 } 00:10:36.912 } 00:10:36.912 ]' 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.912 
13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.912 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.170 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:37.170 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.104 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:38.105 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.105 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.671 00:10:38.671 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.671 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.671 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.929 { 00:10:38.929 "cntlid": 17, 00:10:38.929 "qid": 0, 00:10:38.929 "state": "enabled", 00:10:38.929 "thread": "nvmf_tgt_poll_group_000", 00:10:38.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:38.929 "listen_address": { 00:10:38.929 "trtype": "TCP", 00:10:38.929 "adrfam": "IPv4", 00:10:38.929 "traddr": "10.0.0.3", 00:10:38.929 "trsvcid": "4420" 00:10:38.929 }, 00:10:38.929 "peer_address": { 00:10:38.929 "trtype": "TCP", 00:10:38.929 "adrfam": "IPv4", 00:10:38.929 "traddr": "10.0.0.1", 00:10:38.929 "trsvcid": "48268" 00:10:38.929 }, 00:10:38.929 "auth": { 00:10:38.929 "state": "completed", 00:10:38.929 "digest": "sha256", 00:10:38.929 "dhgroup": "ffdhe3072" 00:10:38.929 } 00:10:38.929 } 00:10:38.929 ]' 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.929 13:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.929 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.187 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:39.187 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:39.755 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.325 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.583 00:10:40.583 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.583 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.583 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.842 { 00:10:40.842 "cntlid": 19, 00:10:40.842 "qid": 0, 00:10:40.842 "state": "enabled", 00:10:40.842 "thread": "nvmf_tgt_poll_group_000", 00:10:40.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:40.842 "listen_address": { 00:10:40.842 "trtype": "TCP", 00:10:40.842 "adrfam": "IPv4", 00:10:40.842 "traddr": "10.0.0.3", 00:10:40.842 "trsvcid": "4420" 00:10:40.842 }, 00:10:40.842 "peer_address": { 00:10:40.842 "trtype": "TCP", 00:10:40.842 "adrfam": "IPv4", 00:10:40.842 "traddr": "10.0.0.1", 00:10:40.842 "trsvcid": "48304" 00:10:40.842 }, 00:10:40.842 "auth": { 00:10:40.842 "state": "completed", 00:10:40.842 "digest": "sha256", 00:10:40.842 "dhgroup": "ffdhe3072" 00:10:40.842 } 00:10:40.842 } 00:10:40.842 ]' 00:10:40.842 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.101 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.101 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.101 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:41.101 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.101 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.101 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.101 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.360 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:41.360 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.294 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.860 00:10:42.860 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.860 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.860 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.119 { 00:10:43.119 "cntlid": 21, 00:10:43.119 "qid": 0, 00:10:43.119 "state": "enabled", 00:10:43.119 "thread": "nvmf_tgt_poll_group_000", 00:10:43.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:43.119 "listen_address": { 00:10:43.119 "trtype": "TCP", 00:10:43.119 "adrfam": "IPv4", 00:10:43.119 "traddr": "10.0.0.3", 00:10:43.119 "trsvcid": "4420" 00:10:43.119 }, 00:10:43.119 "peer_address": { 00:10:43.119 "trtype": "TCP", 00:10:43.119 "adrfam": "IPv4", 00:10:43.119 "traddr": "10.0.0.1", 00:10:43.119 "trsvcid": "48326" 00:10:43.119 }, 00:10:43.119 "auth": { 00:10:43.119 "state": "completed", 00:10:43.119 "digest": "sha256", 00:10:43.119 "dhgroup": "ffdhe3072" 00:10:43.119 } 00:10:43.119 } 00:10:43.119 ]' 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.119 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.119 13:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.119 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:43.119 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.119 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.119 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.119 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.378 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:43.378 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.313 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.572 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.572 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.572 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.572 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.831 00:10:44.831 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.831 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.831 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.089 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.089 { 00:10:45.089 "cntlid": 23, 00:10:45.089 "qid": 0, 00:10:45.089 "state": "enabled", 00:10:45.089 "thread": "nvmf_tgt_poll_group_000", 00:10:45.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:45.089 "listen_address": { 00:10:45.089 "trtype": "TCP", 00:10:45.089 "adrfam": "IPv4", 00:10:45.089 "traddr": "10.0.0.3", 00:10:45.089 "trsvcid": "4420" 00:10:45.089 }, 00:10:45.089 "peer_address": { 00:10:45.089 "trtype": "TCP", 00:10:45.090 "adrfam": "IPv4", 00:10:45.090 "traddr": "10.0.0.1", 00:10:45.090 "trsvcid": "48640" 00:10:45.090 }, 00:10:45.090 "auth": { 00:10:45.090 "state": "completed", 00:10:45.090 "digest": "sha256", 00:10:45.090 "dhgroup": "ffdhe3072" 00:10:45.090 } 00:10:45.090 } 00:10:45.090 ]' 00:10:45.090 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.090 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:45.090 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.090 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:45.090 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.348 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.348 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.348 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.607 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:45.607 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:46.177 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:46.439 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:46.439 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.439 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:46.439 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:46.439 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.440 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.697 00:10:46.697 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.697 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.697 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.956 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.956 { 00:10:46.956 "cntlid": 25, 00:10:46.956 "qid": 0, 00:10:46.956 "state": "enabled", 00:10:46.956 "thread": "nvmf_tgt_poll_group_000", 00:10:46.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:46.956 "listen_address": { 00:10:46.956 "trtype": "TCP", 00:10:46.956 "adrfam": "IPv4", 00:10:46.956 "traddr": "10.0.0.3", 00:10:46.956 "trsvcid": "4420" 00:10:46.956 }, 00:10:46.956 "peer_address": { 00:10:46.956 "trtype": "TCP", 00:10:46.956 "adrfam": "IPv4", 00:10:46.956 "traddr": "10.0.0.1", 00:10:46.956 "trsvcid": "48662" 00:10:46.956 }, 00:10:46.956 "auth": { 00:10:46.956 "state": "completed", 00:10:46.957 "digest": "sha256", 00:10:46.957 "dhgroup": "ffdhe4096" 00:10:46.957 } 00:10:46.957 } 00:10:46.957 ]' 00:10:46.957 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.215 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.473 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:47.473 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:48.040 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.608 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.867 00:10:48.868 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.868 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.868 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.127 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.127 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.127 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.127 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.385 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.385 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.385 { 00:10:49.385 "cntlid": 27, 00:10:49.385 "qid": 0, 00:10:49.386 "state": "enabled", 00:10:49.386 "thread": "nvmf_tgt_poll_group_000", 00:10:49.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:49.386 "listen_address": { 00:10:49.386 "trtype": "TCP", 00:10:49.386 "adrfam": "IPv4", 00:10:49.386 "traddr": "10.0.0.3", 00:10:49.386 "trsvcid": "4420" 00:10:49.386 }, 00:10:49.386 "peer_address": { 00:10:49.386 "trtype": "TCP", 00:10:49.386 "adrfam": "IPv4", 00:10:49.386 "traddr": "10.0.0.1", 00:10:49.386 "trsvcid": "48692" 00:10:49.386 }, 00:10:49.386 "auth": { 00:10:49.386 "state": "completed", 
00:10:49.386 "digest": "sha256", 00:10:49.386 "dhgroup": "ffdhe4096" 00:10:49.386 } 00:10:49.386 } 00:10:49.386 ]' 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.386 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.644 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:49.644 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:50.580 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.581 13:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.581 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.148 00:10:51.148 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.148 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.148 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.407 { 00:10:51.407 "cntlid": 29, 00:10:51.407 "qid": 0, 00:10:51.407 "state": "enabled", 00:10:51.407 "thread": "nvmf_tgt_poll_group_000", 00:10:51.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:51.407 "listen_address": { 00:10:51.407 "trtype": "TCP", 00:10:51.407 "adrfam": "IPv4", 00:10:51.407 "traddr": "10.0.0.3", 00:10:51.407 "trsvcid": "4420" 00:10:51.407 }, 00:10:51.407 "peer_address": { 00:10:51.407 "trtype": "TCP", 00:10:51.407 "adrfam": 
"IPv4", 00:10:51.407 "traddr": "10.0.0.1", 00:10:51.407 "trsvcid": "48728" 00:10:51.407 }, 00:10:51.407 "auth": { 00:10:51.407 "state": "completed", 00:10:51.407 "digest": "sha256", 00:10:51.407 "dhgroup": "ffdhe4096" 00:10:51.407 } 00:10:51.407 } 00:10:51.407 ]' 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.407 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.666 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:51.666 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:52.243 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:52.812 13:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.812 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.071 00:10:53.071 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.071 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.071 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.330 { 00:10:53.330 "cntlid": 31, 00:10:53.330 "qid": 0, 00:10:53.330 "state": "enabled", 00:10:53.330 "thread": "nvmf_tgt_poll_group_000", 00:10:53.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:53.330 "listen_address": { 00:10:53.330 "trtype": "TCP", 00:10:53.330 "adrfam": "IPv4", 00:10:53.330 "traddr": "10.0.0.3", 00:10:53.330 "trsvcid": "4420" 00:10:53.330 }, 00:10:53.330 "peer_address": { 00:10:53.330 "trtype": "TCP", 
00:10:53.330 "adrfam": "IPv4", 00:10:53.330 "traddr": "10.0.0.1", 00:10:53.330 "trsvcid": "35228" 00:10:53.330 }, 00:10:53.330 "auth": { 00:10:53.330 "state": "completed", 00:10:53.330 "digest": "sha256", 00:10:53.330 "dhgroup": "ffdhe4096" 00:10:53.330 } 00:10:53.330 } 00:10:53.330 ]' 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.330 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.589 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.848 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:53.848 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:54.416 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:54.674 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:54.674 
13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.933 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.191 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.450 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.709 { 00:10:55.709 "cntlid": 33, 00:10:55.709 "qid": 0, 00:10:55.709 "state": "enabled", 00:10:55.709 "thread": "nvmf_tgt_poll_group_000", 00:10:55.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:55.709 "listen_address": { 00:10:55.709 "trtype": "TCP", 00:10:55.709 "adrfam": "IPv4", 00:10:55.709 "traddr": 
"10.0.0.3", 00:10:55.709 "trsvcid": "4420" 00:10:55.709 }, 00:10:55.709 "peer_address": { 00:10:55.709 "trtype": "TCP", 00:10:55.709 "adrfam": "IPv4", 00:10:55.709 "traddr": "10.0.0.1", 00:10:55.709 "trsvcid": "35256" 00:10:55.709 }, 00:10:55.709 "auth": { 00:10:55.709 "state": "completed", 00:10:55.709 "digest": "sha256", 00:10:55.709 "dhgroup": "ffdhe6144" 00:10:55.709 } 00:10:55.709 } 00:10:55.709 ]' 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.709 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.968 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:55.968 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:56.916 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.221 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.480 00:10:57.480 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.480 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.480 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.739 { 00:10:57.739 "cntlid": 35, 00:10:57.739 "qid": 0, 00:10:57.739 "state": "enabled", 00:10:57.739 "thread": "nvmf_tgt_poll_group_000", 
00:10:57.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:10:57.739 "listen_address": { 00:10:57.739 "trtype": "TCP", 00:10:57.739 "adrfam": "IPv4", 00:10:57.739 "traddr": "10.0.0.3", 00:10:57.739 "trsvcid": "4420" 00:10:57.739 }, 00:10:57.739 "peer_address": { 00:10:57.739 "trtype": "TCP", 00:10:57.739 "adrfam": "IPv4", 00:10:57.739 "traddr": "10.0.0.1", 00:10:57.739 "trsvcid": "35288" 00:10:57.739 }, 00:10:57.739 "auth": { 00:10:57.739 "state": "completed", 00:10:57.739 "digest": "sha256", 00:10:57.739 "dhgroup": "ffdhe6144" 00:10:57.739 } 00:10:57.739 } 00:10:57.739 ]' 00:10:57.739 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.997 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.255 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:58.255 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.822 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:58.822 13:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.081 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.340 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.340 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.340 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.340 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.598 00:10:59.598 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.598 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.598 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.857 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.857 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.857 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.857 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.116 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.116 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.116 { 
00:11:00.116 "cntlid": 37, 00:11:00.116 "qid": 0, 00:11:00.116 "state": "enabled", 00:11:00.116 "thread": "nvmf_tgt_poll_group_000", 00:11:00.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:00.116 "listen_address": { 00:11:00.116 "trtype": "TCP", 00:11:00.116 "adrfam": "IPv4", 00:11:00.116 "traddr": "10.0.0.3", 00:11:00.116 "trsvcid": "4420" 00:11:00.116 }, 00:11:00.116 "peer_address": { 00:11:00.116 "trtype": "TCP", 00:11:00.116 "adrfam": "IPv4", 00:11:00.116 "traddr": "10.0.0.1", 00:11:00.116 "trsvcid": "35310" 00:11:00.116 }, 00:11:00.116 "auth": { 00:11:00.116 "state": "completed", 00:11:00.116 "digest": "sha256", 00:11:00.116 "dhgroup": "ffdhe6144" 00:11:00.116 } 00:11:00.116 } 00:11:00.116 ]' 00:11:00.116 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.116 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.116 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.116 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:00.116 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.116 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.116 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.116 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.375 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:00.375 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:00.942 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:01.509 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:01.840 00:11:01.840 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.840 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.840 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:02.098 { 00:11:02.098 "cntlid": 39, 00:11:02.098 "qid": 0, 00:11:02.098 "state": "enabled", 00:11:02.098 "thread": "nvmf_tgt_poll_group_000", 00:11:02.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:02.098 "listen_address": { 00:11:02.098 "trtype": "TCP", 00:11:02.098 "adrfam": "IPv4", 00:11:02.098 "traddr": "10.0.0.3", 00:11:02.098 "trsvcid": "4420" 00:11:02.098 }, 00:11:02.098 "peer_address": { 00:11:02.098 "trtype": "TCP", 00:11:02.098 "adrfam": "IPv4", 00:11:02.098 "traddr": "10.0.0.1", 00:11:02.098 "trsvcid": "35346" 00:11:02.098 }, 00:11:02.098 "auth": { 00:11:02.098 "state": "completed", 00:11:02.098 "digest": "sha256", 00:11:02.098 "dhgroup": "ffdhe6144" 00:11:02.098 } 00:11:02.098 } 00:11:02.098 ]' 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.098 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.356 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.356 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.356 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.615 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:02.615 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:03.182 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.441 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.376 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.376 { 00:11:04.376 "cntlid": 41, 00:11:04.376 "qid": 0, 00:11:04.376 "state": "enabled", 00:11:04.376 "thread": "nvmf_tgt_poll_group_000", 00:11:04.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:04.376 "listen_address": { 00:11:04.376 "trtype": "TCP", 00:11:04.376 "adrfam": "IPv4", 00:11:04.376 "traddr": "10.0.0.3", 00:11:04.376 "trsvcid": "4420" 00:11:04.376 }, 00:11:04.376 "peer_address": { 00:11:04.376 "trtype": "TCP", 00:11:04.376 "adrfam": "IPv4", 00:11:04.376 "traddr": "10.0.0.1", 00:11:04.376 "trsvcid": "33970" 00:11:04.376 }, 00:11:04.376 "auth": { 00:11:04.376 "state": "completed", 00:11:04.376 "digest": "sha256", 00:11:04.376 "dhgroup": "ffdhe8192" 00:11:04.376 } 00:11:04.376 } 00:11:04.376 ]' 00:11:04.376 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.635 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.893 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:04.893 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
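For readability, the per-key cycle that target/auth.sh keeps repeating above for each digest/dhgroup/key combination boils down to the sequence below. This is a condensed sketch, not a verbatim excerpt: the NQNs, addresses, socket path and key names are taken from the log entries above, rpc_cmd is assumed to be the test framework's wrapper around the target-side rpc.py, and the key0/ckey0 keyring entries are assumed to have been registered earlier in the script, outside this excerpt.

# host RPC server: restrict DH-CHAP negotiation to the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# target: allow this host NQN with the key pair for the current iteration
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host: attach a controller so the DH-CHAP handshake actually runs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# target: inspect what the qpair actually negotiated, then tear the controller down
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0

The three jq checks correspond to the [[ sha256 == ... ]], [[ ffdhe... == ... ]] and [[ completed == ... ]] assertions in the log: the qpair must have negotiated exactly the configured digest and DH group and reached the completed authentication state before the controller is detached.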
00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.829 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.765 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.765 13:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.765 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.765 { 00:11:06.765 "cntlid": 43, 00:11:06.765 "qid": 0, 00:11:06.765 "state": "enabled", 00:11:06.765 "thread": "nvmf_tgt_poll_group_000", 00:11:06.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:06.765 "listen_address": { 00:11:06.765 "trtype": "TCP", 00:11:06.765 "adrfam": "IPv4", 00:11:06.765 "traddr": "10.0.0.3", 00:11:06.765 "trsvcid": "4420" 00:11:06.765 }, 00:11:06.765 "peer_address": { 00:11:06.765 "trtype": "TCP", 00:11:06.765 "adrfam": "IPv4", 00:11:06.765 "traddr": "10.0.0.1", 00:11:06.765 "trsvcid": "33990" 00:11:06.765 }, 00:11:06.766 "auth": { 00:11:06.766 "state": "completed", 00:11:06.766 "digest": "sha256", 00:11:06.766 "dhgroup": "ffdhe8192" 00:11:06.766 } 00:11:06.766 } 00:11:06.766 ]' 00:11:06.766 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.024 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.282 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:07.282 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
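The jq checks against the nvmf_subsystem_get_qpairs output above can be reproduced by hand. A minimal sketch, assuming rpc_cmd talks to the target's default RPC socket as it does in this run:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# digest, DH group and final state negotiated for the first queue pair (qid 0)
jq -r '.[0].auth.digest'  <<< "$qpairs"    # expected: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # expected: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"    # expected: completed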
00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:08.217 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.475 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.476 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.042 00:11:09.042 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.042 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.042 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.301 13:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.301 { 00:11:09.301 "cntlid": 45, 00:11:09.301 "qid": 0, 00:11:09.301 "state": "enabled", 00:11:09.301 "thread": "nvmf_tgt_poll_group_000", 00:11:09.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:09.301 "listen_address": { 00:11:09.301 "trtype": "TCP", 00:11:09.301 "adrfam": "IPv4", 00:11:09.301 "traddr": "10.0.0.3", 00:11:09.301 "trsvcid": "4420" 00:11:09.301 }, 00:11:09.301 "peer_address": { 00:11:09.301 "trtype": "TCP", 00:11:09.301 "adrfam": "IPv4", 00:11:09.301 "traddr": "10.0.0.1", 00:11:09.301 "trsvcid": "34010" 00:11:09.301 }, 00:11:09.301 "auth": { 00:11:09.301 "state": "completed", 00:11:09.301 "digest": "sha256", 00:11:09.301 "dhgroup": "ffdhe8192" 00:11:09.301 } 00:11:09.301 } 00:11:09.301 ]' 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.301 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.559 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:09.559 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.559 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.559 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.559 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.818 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:09.818 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
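Each pass also authenticates the Linux kernel host through nvme-cli (the @36 nvme_connect lines above), passing both secrets in their DHHC-1 text form. With the long base64 payloads elided as placeholders, the connect/disconnect pair is:

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret 'DHHC-1:02:<base64 secret>:' --dhchap-ctrl-secret 'DHHC-1:01:<base64 secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0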
00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.386 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.645 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.580 00:11:11.580 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.580 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.580 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.838 
13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.838 { 00:11:11.838 "cntlid": 47, 00:11:11.838 "qid": 0, 00:11:11.838 "state": "enabled", 00:11:11.838 "thread": "nvmf_tgt_poll_group_000", 00:11:11.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:11.838 "listen_address": { 00:11:11.838 "trtype": "TCP", 00:11:11.838 "adrfam": "IPv4", 00:11:11.838 "traddr": "10.0.0.3", 00:11:11.838 "trsvcid": "4420" 00:11:11.838 }, 00:11:11.838 "peer_address": { 00:11:11.838 "trtype": "TCP", 00:11:11.838 "adrfam": "IPv4", 00:11:11.838 "traddr": "10.0.0.1", 00:11:11.838 "trsvcid": "34044" 00:11:11.838 }, 00:11:11.838 "auth": { 00:11:11.838 "state": "completed", 00:11:11.838 "digest": "sha256", 00:11:11.838 "dhgroup": "ffdhe8192" 00:11:11.838 } 00:11:11.838 } 00:11:11.838 ]' 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.838 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.097 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:12.097 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
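From the next entry on, the log moves from sha256/ffdhe8192 to sha384 with the null DH group. The driver is the test's nested loop over digests, DH groups and key indices (the @118-@123 lines in the trace); in outline:

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # limit the host to one digest/dhgroup pair, then run a full
      # add_host / attach / verify / detach / nvme connect / remove_host pass
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done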
00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:13.031 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.290 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.548 00:11:13.548 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.548 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.548 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.807 { 00:11:13.807 "cntlid": 49, 00:11:13.807 "qid": 0, 00:11:13.807 "state": "enabled", 00:11:13.807 "thread": "nvmf_tgt_poll_group_000", 00:11:13.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:13.807 "listen_address": { 00:11:13.807 "trtype": "TCP", 00:11:13.807 "adrfam": "IPv4", 00:11:13.807 "traddr": "10.0.0.3", 00:11:13.807 "trsvcid": "4420" 00:11:13.807 }, 00:11:13.807 "peer_address": { 00:11:13.807 "trtype": "TCP", 00:11:13.807 "adrfam": "IPv4", 00:11:13.807 "traddr": "10.0.0.1", 00:11:13.807 "trsvcid": "57748" 00:11:13.807 }, 00:11:13.807 "auth": { 00:11:13.807 "state": "completed", 00:11:13.807 "digest": "sha384", 00:11:13.807 "dhgroup": "null" 00:11:13.807 } 00:11:13.807 } 00:11:13.807 ]' 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:13.807 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.066 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.066 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.066 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.325 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:14.325 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.892 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:14.892 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.150 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.717 00:11:15.717 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.717 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.717 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.975 { 00:11:15.975 "cntlid": 51, 00:11:15.975 "qid": 0, 00:11:15.975 "state": "enabled", 00:11:15.975 "thread": "nvmf_tgt_poll_group_000", 00:11:15.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:15.975 "listen_address": { 00:11:15.975 "trtype": "TCP", 00:11:15.975 "adrfam": "IPv4", 00:11:15.975 "traddr": "10.0.0.3", 00:11:15.975 "trsvcid": "4420" 00:11:15.975 }, 00:11:15.975 "peer_address": { 00:11:15.975 "trtype": "TCP", 00:11:15.975 "adrfam": "IPv4", 00:11:15.975 "traddr": "10.0.0.1", 00:11:15.975 "trsvcid": "57754" 00:11:15.975 }, 00:11:15.975 "auth": { 00:11:15.975 "state": "completed", 00:11:15.975 "digest": "sha384", 00:11:15.975 "dhgroup": "null" 00:11:15.975 } 00:11:15.975 } 00:11:15.975 ]' 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.975 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.543 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:16.543 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:17.110 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.110 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:17.110 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.369 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.952 00:11:17.952 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.952 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:11:17.952 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.244 { 00:11:18.244 "cntlid": 53, 00:11:18.244 "qid": 0, 00:11:18.244 "state": "enabled", 00:11:18.244 "thread": "nvmf_tgt_poll_group_000", 00:11:18.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:18.244 "listen_address": { 00:11:18.244 "trtype": "TCP", 00:11:18.244 "adrfam": "IPv4", 00:11:18.244 "traddr": "10.0.0.3", 00:11:18.244 "trsvcid": "4420" 00:11:18.244 }, 00:11:18.244 "peer_address": { 00:11:18.244 "trtype": "TCP", 00:11:18.244 "adrfam": "IPv4", 00:11:18.244 "traddr": "10.0.0.1", 00:11:18.244 "trsvcid": "57778" 00:11:18.244 }, 00:11:18.244 "auth": { 00:11:18.244 "state": "completed", 00:11:18.244 "digest": "sha384", 00:11:18.244 "dhgroup": "null" 00:11:18.244 } 00:11:18.244 } 00:11:18.244 ]' 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.244 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.503 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:18.503 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:19.070 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.070 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.071 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.637 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.637 00:11:19.895 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.895 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.895 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.153 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.153 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.153 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.153 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.153 { 00:11:20.153 "cntlid": 55, 00:11:20.153 "qid": 0, 00:11:20.153 "state": "enabled", 00:11:20.153 "thread": "nvmf_tgt_poll_group_000", 00:11:20.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:20.153 "listen_address": { 00:11:20.153 "trtype": "TCP", 00:11:20.153 "adrfam": "IPv4", 00:11:20.153 "traddr": "10.0.0.3", 00:11:20.153 "trsvcid": "4420" 00:11:20.153 }, 00:11:20.153 "peer_address": { 00:11:20.153 "trtype": "TCP", 00:11:20.153 "adrfam": "IPv4", 00:11:20.153 "traddr": "10.0.0.1", 00:11:20.153 "trsvcid": "57806" 00:11:20.153 }, 00:11:20.153 "auth": { 00:11:20.153 "state": "completed", 00:11:20.153 "digest": "sha384", 00:11:20.153 "dhgroup": "null" 00:11:20.153 } 00:11:20.153 } 00:11:20.153 ]' 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.153 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.411 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:20.411 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.345 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.603 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.603 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.603 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.603 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.862 00:11:21.862 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
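One detail in the @68 expansions: the controller (bidirectional) key is optional. connect_authenticate only passes --dhchap-ctrlr-key when a ckey exists for the given index, which is why the key3 passes add the host with --dhchap-key key3 alone. The guard is a plain ${var:+...} expansion ($3 being the key index argument of connect_authenticate; the surrounding NQN and variable names here are illustrative):

# expands to the option only when ckeys[$3] is set and non-empty
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"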
00:11:21.862 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.862 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.121 { 00:11:22.121 "cntlid": 57, 00:11:22.121 "qid": 0, 00:11:22.121 "state": "enabled", 00:11:22.121 "thread": "nvmf_tgt_poll_group_000", 00:11:22.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:22.121 "listen_address": { 00:11:22.121 "trtype": "TCP", 00:11:22.121 "adrfam": "IPv4", 00:11:22.121 "traddr": "10.0.0.3", 00:11:22.121 "trsvcid": "4420" 00:11:22.121 }, 00:11:22.121 "peer_address": { 00:11:22.121 "trtype": "TCP", 00:11:22.121 "adrfam": "IPv4", 00:11:22.121 "traddr": "10.0.0.1", 00:11:22.121 "trsvcid": "57826" 00:11:22.121 }, 00:11:22.121 "auth": { 00:11:22.121 "state": "completed", 00:11:22.121 "digest": "sha384", 00:11:22.121 "dhgroup": "ffdhe2048" 00:11:22.121 } 00:11:22.121 } 00:11:22.121 ]' 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.121 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.379 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.379 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.379 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.638 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:22.638 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: 
--dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:23.205 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.464 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.722 00:11:23.979 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.979 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.979 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.237 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.237 { 00:11:24.237 "cntlid": 59, 00:11:24.237 "qid": 0, 00:11:24.237 "state": "enabled", 00:11:24.237 "thread": "nvmf_tgt_poll_group_000", 00:11:24.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:24.237 "listen_address": { 00:11:24.237 "trtype": "TCP", 00:11:24.237 "adrfam": "IPv4", 00:11:24.238 "traddr": "10.0.0.3", 00:11:24.238 "trsvcid": "4420" 00:11:24.238 }, 00:11:24.238 "peer_address": { 00:11:24.238 "trtype": "TCP", 00:11:24.238 "adrfam": "IPv4", 00:11:24.238 "traddr": "10.0.0.1", 00:11:24.238 "trsvcid": "47326" 00:11:24.238 }, 00:11:24.238 "auth": { 00:11:24.238 "state": "completed", 00:11:24.238 "digest": "sha384", 00:11:24.238 "dhgroup": "ffdhe2048" 00:11:24.238 } 00:11:24.238 } 00:11:24.238 ]' 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.238 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.804 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:24.804 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:25.370 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.628 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.887 00:11:25.887 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.887 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.887 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.145 { 00:11:26.145 "cntlid": 61, 00:11:26.145 "qid": 0, 00:11:26.145 "state": "enabled", 00:11:26.145 "thread": "nvmf_tgt_poll_group_000", 00:11:26.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:26.145 "listen_address": { 00:11:26.145 "trtype": "TCP", 00:11:26.145 "adrfam": "IPv4", 00:11:26.145 "traddr": "10.0.0.3", 00:11:26.145 "trsvcid": "4420" 00:11:26.145 }, 00:11:26.145 "peer_address": { 00:11:26.145 "trtype": "TCP", 00:11:26.145 "adrfam": "IPv4", 00:11:26.145 "traddr": "10.0.0.1", 00:11:26.145 "trsvcid": "47360" 00:11:26.145 }, 00:11:26.145 "auth": { 00:11:26.145 "state": "completed", 00:11:26.145 "digest": "sha384", 00:11:26.145 "dhgroup": "ffdhe2048" 00:11:26.145 } 00:11:26.145 } 00:11:26.145 ]' 00:11:26.145 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.403 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.661 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:26.661 13:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:27.595 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.596 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.854 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.113 00:11:28.113 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.113 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.113 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.371 { 00:11:28.371 "cntlid": 63, 00:11:28.371 "qid": 0, 00:11:28.371 "state": "enabled", 00:11:28.371 "thread": "nvmf_tgt_poll_group_000", 00:11:28.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:28.371 "listen_address": { 00:11:28.371 "trtype": "TCP", 00:11:28.371 "adrfam": "IPv4", 00:11:28.371 "traddr": "10.0.0.3", 00:11:28.371 "trsvcid": "4420" 00:11:28.371 }, 00:11:28.371 "peer_address": { 00:11:28.371 "trtype": "TCP", 00:11:28.371 "adrfam": "IPv4", 00:11:28.371 "traddr": "10.0.0.1", 00:11:28.371 "trsvcid": "47376" 00:11:28.371 }, 00:11:28.371 "auth": { 00:11:28.371 "state": "completed", 00:11:28.371 "digest": "sha384", 00:11:28.371 "dhgroup": "ffdhe2048" 00:11:28.371 } 00:11:28.371 } 00:11:28.371 ]' 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.371 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.629 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.629 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.629 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.887 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:28.887 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:29.453 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:29.711 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.277 00:11:30.277 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.277 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.277 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.535 { 00:11:30.535 "cntlid": 65, 00:11:30.535 "qid": 0, 00:11:30.535 "state": "enabled", 00:11:30.535 "thread": "nvmf_tgt_poll_group_000", 00:11:30.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:30.535 "listen_address": { 00:11:30.535 "trtype": "TCP", 00:11:30.535 "adrfam": "IPv4", 00:11:30.535 "traddr": "10.0.0.3", 00:11:30.535 "trsvcid": "4420" 00:11:30.535 }, 00:11:30.535 "peer_address": { 00:11:30.535 "trtype": "TCP", 00:11:30.535 "adrfam": "IPv4", 00:11:30.535 "traddr": "10.0.0.1", 00:11:30.535 "trsvcid": "47414" 00:11:30.535 }, 00:11:30.535 "auth": { 00:11:30.535 "state": "completed", 00:11:30.535 "digest": "sha384", 00:11:30.535 "dhgroup": "ffdhe3072" 00:11:30.535 } 00:11:30.535 } 00:11:30.535 ]' 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.535 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.536 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.536 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.536 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.794 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:30.794 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:31.727 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.986 13:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.986 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.244 00:11:32.244 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.244 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.244 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.501 { 00:11:32.501 "cntlid": 67, 00:11:32.501 "qid": 0, 00:11:32.501 "state": "enabled", 00:11:32.501 "thread": "nvmf_tgt_poll_group_000", 00:11:32.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:32.501 "listen_address": { 00:11:32.501 "trtype": "TCP", 00:11:32.501 "adrfam": "IPv4", 00:11:32.501 "traddr": "10.0.0.3", 00:11:32.501 "trsvcid": "4420" 00:11:32.501 }, 00:11:32.501 "peer_address": { 00:11:32.501 "trtype": "TCP", 00:11:32.501 "adrfam": "IPv4", 00:11:32.501 "traddr": "10.0.0.1", 00:11:32.501 "trsvcid": "47442" 00:11:32.501 }, 00:11:32.501 "auth": { 00:11:32.501 "state": "completed", 00:11:32.501 "digest": "sha384", 00:11:32.501 "dhgroup": "ffdhe3072" 00:11:32.501 } 00:11:32.501 } 00:11:32.501 ]' 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.501 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.836 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.836 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.836 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.108 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:33.108 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:33.675 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.934 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.192 00:11:34.192 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.192 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.192 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.451 { 00:11:34.451 "cntlid": 69, 00:11:34.451 "qid": 0, 00:11:34.451 "state": "enabled", 00:11:34.451 "thread": "nvmf_tgt_poll_group_000", 00:11:34.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:34.451 "listen_address": { 00:11:34.451 "trtype": "TCP", 00:11:34.451 "adrfam": "IPv4", 00:11:34.451 "traddr": "10.0.0.3", 00:11:34.451 "trsvcid": "4420" 00:11:34.451 }, 00:11:34.451 "peer_address": { 00:11:34.451 "trtype": "TCP", 00:11:34.451 "adrfam": "IPv4", 00:11:34.451 "traddr": "10.0.0.1", 00:11:34.451 "trsvcid": "43324" 00:11:34.451 }, 00:11:34.451 "auth": { 00:11:34.451 "state": "completed", 00:11:34.451 "digest": "sha384", 00:11:34.451 "dhgroup": "ffdhe3072" 00:11:34.451 } 00:11:34.451 } 00:11:34.451 ]' 00:11:34.451 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:34.710 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.969 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:34.969 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:35.536 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.794 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.052 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.311 { 00:11:36.311 "cntlid": 71, 00:11:36.311 "qid": 0, 00:11:36.311 "state": "enabled", 00:11:36.311 "thread": "nvmf_tgt_poll_group_000", 00:11:36.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:36.311 "listen_address": { 00:11:36.311 "trtype": "TCP", 00:11:36.311 "adrfam": "IPv4", 00:11:36.311 "traddr": "10.0.0.3", 00:11:36.311 "trsvcid": "4420" 00:11:36.311 }, 00:11:36.311 "peer_address": { 00:11:36.311 "trtype": "TCP", 00:11:36.311 "adrfam": "IPv4", 00:11:36.311 "traddr": "10.0.0.1", 00:11:36.311 "trsvcid": "43368" 00:11:36.311 }, 00:11:36.311 "auth": { 00:11:36.311 "state": "completed", 00:11:36.311 "digest": "sha384", 00:11:36.311 "dhgroup": "ffdhe3072" 00:11:36.311 } 00:11:36.311 } 00:11:36.311 ]' 00:11:36.311 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.570 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.829 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:36.829 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:37.396 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:37.654 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.913 13:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.913 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.172 00:11:38.172 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.172 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.172 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.430 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.430 { 00:11:38.430 "cntlid": 73, 00:11:38.430 "qid": 0, 00:11:38.430 "state": "enabled", 00:11:38.430 "thread": "nvmf_tgt_poll_group_000", 00:11:38.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:38.430 "listen_address": { 00:11:38.430 "trtype": "TCP", 00:11:38.430 "adrfam": "IPv4", 00:11:38.430 "traddr": "10.0.0.3", 00:11:38.430 "trsvcid": "4420" 00:11:38.430 }, 00:11:38.430 "peer_address": { 00:11:38.430 "trtype": "TCP", 00:11:38.430 "adrfam": "IPv4", 00:11:38.430 "traddr": "10.0.0.1", 00:11:38.430 "trsvcid": "43410" 00:11:38.430 }, 00:11:38.430 "auth": { 00:11:38.430 "state": "completed", 00:11:38.430 "digest": "sha384", 00:11:38.430 "dhgroup": "ffdhe4096" 00:11:38.431 } 00:11:38.431 } 00:11:38.431 ]' 00:11:38.431 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.689 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.948 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:38.948 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.884 13:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.884 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.143 00:11:40.402 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.402 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.402 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.660 { 00:11:40.660 "cntlid": 75, 00:11:40.660 "qid": 0, 00:11:40.660 "state": "enabled", 00:11:40.660 "thread": "nvmf_tgt_poll_group_000", 00:11:40.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:40.660 "listen_address": { 00:11:40.660 "trtype": "TCP", 00:11:40.660 "adrfam": "IPv4", 00:11:40.660 "traddr": "10.0.0.3", 00:11:40.660 "trsvcid": "4420" 00:11:40.660 }, 00:11:40.660 "peer_address": { 00:11:40.660 "trtype": "TCP", 00:11:40.660 "adrfam": "IPv4", 00:11:40.660 "traddr": "10.0.0.1", 00:11:40.660 "trsvcid": "43430" 00:11:40.660 }, 00:11:40.660 "auth": { 00:11:40.660 "state": "completed", 00:11:40.660 "digest": "sha384", 00:11:40.660 "dhgroup": "ffdhe4096" 00:11:40.660 } 00:11:40.660 } 00:11:40.660 ]' 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.660 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.661 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.661 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.919 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:40.919 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.855 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.856 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.856 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.421 00:11:42.421 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.421 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.421 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.679 { 00:11:42.679 "cntlid": 77, 00:11:42.679 "qid": 0, 00:11:42.679 "state": "enabled", 00:11:42.679 "thread": "nvmf_tgt_poll_group_000", 00:11:42.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:42.679 "listen_address": { 00:11:42.679 "trtype": "TCP", 00:11:42.679 "adrfam": "IPv4", 00:11:42.679 "traddr": "10.0.0.3", 00:11:42.679 "trsvcid": "4420" 00:11:42.679 }, 00:11:42.679 "peer_address": { 00:11:42.679 "trtype": "TCP", 00:11:42.679 "adrfam": "IPv4", 00:11:42.679 "traddr": "10.0.0.1", 00:11:42.679 "trsvcid": "43456" 00:11:42.679 }, 00:11:42.680 "auth": { 00:11:42.680 "state": "completed", 00:11:42.680 "digest": "sha384", 00:11:42.680 "dhgroup": "ffdhe4096" 00:11:42.680 } 00:11:42.680 } 00:11:42.680 ]' 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.680 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.938 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:42.938 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.873 13:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.873 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.440 00:11:44.440 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.440 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.440 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.698 { 00:11:44.698 "cntlid": 79, 00:11:44.698 "qid": 0, 00:11:44.698 "state": "enabled", 00:11:44.698 "thread": "nvmf_tgt_poll_group_000", 00:11:44.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:44.698 "listen_address": { 00:11:44.698 "trtype": "TCP", 00:11:44.698 "adrfam": "IPv4", 00:11:44.698 "traddr": "10.0.0.3", 00:11:44.698 "trsvcid": "4420" 00:11:44.698 }, 00:11:44.698 "peer_address": { 00:11:44.698 "trtype": "TCP", 00:11:44.698 "adrfam": "IPv4", 00:11:44.698 "traddr": "10.0.0.1", 00:11:44.698 "trsvcid": "37718" 00:11:44.698 }, 00:11:44.698 "auth": { 00:11:44.698 "state": "completed", 00:11:44.698 "digest": "sha384", 00:11:44.698 "dhgroup": "ffdhe4096" 00:11:44.698 } 00:11:44.698 } 00:11:44.698 ]' 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.698 13:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.698 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.957 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.957 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.957 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.215 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:45.215 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:45.781 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.781 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:45.781 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.781 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:45.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.350 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.609 00:11:46.609 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.609 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.609 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.868 { 00:11:46.868 "cntlid": 81, 00:11:46.868 "qid": 0, 00:11:46.868 "state": "enabled", 00:11:46.868 "thread": "nvmf_tgt_poll_group_000", 00:11:46.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:46.868 "listen_address": { 00:11:46.868 "trtype": "TCP", 00:11:46.868 "adrfam": "IPv4", 00:11:46.868 "traddr": "10.0.0.3", 00:11:46.868 "trsvcid": "4420" 00:11:46.868 }, 00:11:46.868 "peer_address": { 00:11:46.868 "trtype": "TCP", 00:11:46.868 "adrfam": "IPv4", 00:11:46.868 "traddr": "10.0.0.1", 00:11:46.868 "trsvcid": "37746" 00:11:46.868 }, 00:11:46.868 "auth": { 00:11:46.868 "state": "completed", 00:11:46.868 "digest": "sha384", 00:11:46.868 "dhgroup": "ffdhe6144" 00:11:46.868 } 00:11:46.868 } 00:11:46.868 ]' 00:11:46.868 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
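The trace repeats one verification cycle per DHCHAP key index for the DH group under test (digest sha384; ffdhe4096, ffdhe6144, then ffdhe8192). A minimal sketch of a single cycle, distilled only from commands visible in this trace, follows; the socket path, NQNs, addresses, key names and DHHC-1 secrets are the ones used by this particular run (rpc_cmd stands for the harness's target-side rpc.py wrapper), not general defaults.

  # host-side SPDK RPC: restrict negotiation to the digest/dhgroup under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # target-side RPC: allow the host NQN with a DHCHAP key and controller key
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host-side RPC: attach a controller with the matching key, then inspect the qpair's auth block
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # detach, then exercise the kernel initiator path with the raw DHHC-1 secrets for the same key
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
      --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 \
      --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: \
      --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # tear down the host entry before the next key/dhgroup combination
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5

The trace below continues with the same checks (auth state "completed", digest and dhgroup matching the configured values) for the remaining keys and DH groups.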
00:11:47.126 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.126 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.126 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:47.126 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.126 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.126 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.126 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.385 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:47.385 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.321 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.896 00:11:48.896 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.896 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.896 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.154 { 00:11:49.154 "cntlid": 83, 00:11:49.154 "qid": 0, 00:11:49.154 "state": "enabled", 00:11:49.154 "thread": "nvmf_tgt_poll_group_000", 00:11:49.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:49.154 "listen_address": { 00:11:49.154 "trtype": "TCP", 00:11:49.154 "adrfam": "IPv4", 00:11:49.154 "traddr": "10.0.0.3", 00:11:49.154 "trsvcid": "4420" 00:11:49.154 }, 00:11:49.154 "peer_address": { 00:11:49.154 "trtype": "TCP", 00:11:49.154 "adrfam": "IPv4", 00:11:49.154 "traddr": "10.0.0.1", 00:11:49.154 "trsvcid": "37770" 00:11:49.154 }, 00:11:49.154 "auth": { 00:11:49.154 "state": "completed", 00:11:49.154 "digest": "sha384", 
00:11:49.154 "dhgroup": "ffdhe6144" 00:11:49.154 } 00:11:49.154 } 00:11:49.154 ]' 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.154 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.413 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:49.413 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.413 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.413 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.413 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.672 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:49.672 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:50.240 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.499 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.065 00:11:51.065 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.065 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.065 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.324 { 00:11:51.324 "cntlid": 85, 00:11:51.324 "qid": 0, 00:11:51.324 "state": "enabled", 00:11:51.324 "thread": "nvmf_tgt_poll_group_000", 00:11:51.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:51.324 "listen_address": { 00:11:51.324 "trtype": "TCP", 00:11:51.324 "adrfam": "IPv4", 00:11:51.324 "traddr": "10.0.0.3", 00:11:51.324 "trsvcid": "4420" 00:11:51.324 }, 00:11:51.324 "peer_address": { 00:11:51.324 "trtype": "TCP", 00:11:51.324 "adrfam": "IPv4", 00:11:51.324 "traddr": "10.0.0.1", 00:11:51.324 "trsvcid": "37792" 
00:11:51.324 }, 00:11:51.324 "auth": { 00:11:51.324 "state": "completed", 00:11:51.324 "digest": "sha384", 00:11:51.324 "dhgroup": "ffdhe6144" 00:11:51.324 } 00:11:51.324 } 00:11:51.324 ]' 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.324 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:51.584 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.584 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.584 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.584 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.843 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:51.843 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:11:52.410 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.668 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.927 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.494 00:11:53.494 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.494 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.494 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.752 { 00:11:53.752 "cntlid": 87, 00:11:53.752 "qid": 0, 00:11:53.752 "state": "enabled", 00:11:53.752 "thread": "nvmf_tgt_poll_group_000", 00:11:53.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:53.752 "listen_address": { 00:11:53.752 "trtype": "TCP", 00:11:53.752 "adrfam": "IPv4", 00:11:53.752 "traddr": "10.0.0.3", 00:11:53.752 "trsvcid": "4420" 00:11:53.752 }, 00:11:53.752 "peer_address": { 00:11:53.752 "trtype": "TCP", 00:11:53.752 "adrfam": "IPv4", 00:11:53.752 "traddr": "10.0.0.1", 00:11:53.752 "trsvcid": 
"36306" 00:11:53.752 }, 00:11:53.752 "auth": { 00:11:53.752 "state": "completed", 00:11:53.752 "digest": "sha384", 00:11:53.752 "dhgroup": "ffdhe6144" 00:11:53.752 } 00:11:53.752 } 00:11:53.752 ]' 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.752 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.011 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:54.011 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.011 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.011 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.011 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.269 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:54.269 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:54.835 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.094 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.661 00:11:55.661 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.661 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.661 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.936 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.936 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.936 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.936 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.206 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.206 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.206 { 00:11:56.206 "cntlid": 89, 00:11:56.206 "qid": 0, 00:11:56.206 "state": "enabled", 00:11:56.206 "thread": "nvmf_tgt_poll_group_000", 00:11:56.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:11:56.206 "listen_address": { 00:11:56.206 "trtype": "TCP", 00:11:56.206 "adrfam": "IPv4", 00:11:56.206 "traddr": "10.0.0.3", 00:11:56.206 "trsvcid": "4420" 00:11:56.206 }, 00:11:56.206 "peer_address": { 00:11:56.206 
"trtype": "TCP", 00:11:56.206 "adrfam": "IPv4", 00:11:56.206 "traddr": "10.0.0.1", 00:11:56.206 "trsvcid": "36324" 00:11:56.206 }, 00:11:56.206 "auth": { 00:11:56.206 "state": "completed", 00:11:56.206 "digest": "sha384", 00:11:56.206 "dhgroup": "ffdhe8192" 00:11:56.206 } 00:11:56.206 } 00:11:56.206 ]' 00:11:56.206 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.206 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.464 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:56.464 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:11:57.030 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:57.287 13:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.287 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.545 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.545 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.545 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.545 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.113 00:11:58.113 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.113 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.113 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.372 { 00:11:58.372 "cntlid": 91, 00:11:58.372 "qid": 0, 00:11:58.372 "state": "enabled", 00:11:58.372 "thread": "nvmf_tgt_poll_group_000", 00:11:58.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 
00:11:58.372 "listen_address": { 00:11:58.372 "trtype": "TCP", 00:11:58.372 "adrfam": "IPv4", 00:11:58.372 "traddr": "10.0.0.3", 00:11:58.372 "trsvcid": "4420" 00:11:58.372 }, 00:11:58.372 "peer_address": { 00:11:58.372 "trtype": "TCP", 00:11:58.372 "adrfam": "IPv4", 00:11:58.372 "traddr": "10.0.0.1", 00:11:58.372 "trsvcid": "36350" 00:11:58.372 }, 00:11:58.372 "auth": { 00:11:58.372 "state": "completed", 00:11:58.372 "digest": "sha384", 00:11:58.372 "dhgroup": "ffdhe8192" 00:11:58.372 } 00:11:58.372 } 00:11:58.372 ]' 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.372 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.631 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:58.631 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.631 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.631 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.631 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.888 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:58.888 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.822 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.757 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.757 { 00:12:00.757 "cntlid": 93, 00:12:00.757 "qid": 0, 00:12:00.757 "state": "enabled", 00:12:00.757 "thread": 
"nvmf_tgt_poll_group_000", 00:12:00.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:00.757 "listen_address": { 00:12:00.757 "trtype": "TCP", 00:12:00.757 "adrfam": "IPv4", 00:12:00.757 "traddr": "10.0.0.3", 00:12:00.757 "trsvcid": "4420" 00:12:00.757 }, 00:12:00.757 "peer_address": { 00:12:00.757 "trtype": "TCP", 00:12:00.757 "adrfam": "IPv4", 00:12:00.757 "traddr": "10.0.0.1", 00:12:00.757 "trsvcid": "36374" 00:12:00.757 }, 00:12:00.757 "auth": { 00:12:00.757 "state": "completed", 00:12:00.757 "digest": "sha384", 00:12:00.757 "dhgroup": "ffdhe8192" 00:12:00.757 } 00:12:00.757 } 00:12:00.757 ]' 00:12:00.757 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.051 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.335 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:01.335 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.902 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.902 13:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.160 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.725 00:12:02.984 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.984 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.984 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.243 { 00:12:03.243 "cntlid": 95, 00:12:03.243 "qid": 0, 00:12:03.243 "state": "enabled", 00:12:03.243 
"thread": "nvmf_tgt_poll_group_000", 00:12:03.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:03.243 "listen_address": { 00:12:03.243 "trtype": "TCP", 00:12:03.243 "adrfam": "IPv4", 00:12:03.243 "traddr": "10.0.0.3", 00:12:03.243 "trsvcid": "4420" 00:12:03.243 }, 00:12:03.243 "peer_address": { 00:12:03.243 "trtype": "TCP", 00:12:03.243 "adrfam": "IPv4", 00:12:03.243 "traddr": "10.0.0.1", 00:12:03.243 "trsvcid": "36402" 00:12:03.243 }, 00:12:03.243 "auth": { 00:12:03.243 "state": "completed", 00:12:03.243 "digest": "sha384", 00:12:03.243 "dhgroup": "ffdhe8192" 00:12:03.243 } 00:12:03.243 } 00:12:03.243 ]' 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.243 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.502 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:03.502 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.437 13:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.437 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.004 00:12:05.004 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.004 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.004 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.262 { 00:12:05.262 "cntlid": 97, 00:12:05.262 "qid": 0, 00:12:05.262 "state": "enabled", 00:12:05.262 "thread": "nvmf_tgt_poll_group_000", 00:12:05.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:05.262 "listen_address": { 00:12:05.262 "trtype": "TCP", 00:12:05.262 "adrfam": "IPv4", 00:12:05.262 "traddr": "10.0.0.3", 00:12:05.262 "trsvcid": "4420" 00:12:05.262 }, 00:12:05.262 "peer_address": { 00:12:05.262 "trtype": "TCP", 00:12:05.262 "adrfam": "IPv4", 00:12:05.262 "traddr": "10.0.0.1", 00:12:05.262 "trsvcid": "49280" 00:12:05.262 }, 00:12:05.262 "auth": { 00:12:05.262 "state": "completed", 00:12:05.262 "digest": "sha512", 00:12:05.262 "dhgroup": "null" 00:12:05.262 } 00:12:05.262 } 00:12:05.262 ]' 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.262 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.263 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.521 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:05.521 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:06.456 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:06.714 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:06.714 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.715 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.973 00:12:06.973 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.973 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.973 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.231 13:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.231 { 00:12:07.231 "cntlid": 99, 00:12:07.231 "qid": 0, 00:12:07.231 "state": "enabled", 00:12:07.231 "thread": "nvmf_tgt_poll_group_000", 00:12:07.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:07.231 "listen_address": { 00:12:07.231 "trtype": "TCP", 00:12:07.231 "adrfam": "IPv4", 00:12:07.231 "traddr": "10.0.0.3", 00:12:07.231 "trsvcid": "4420" 00:12:07.231 }, 00:12:07.231 "peer_address": { 00:12:07.231 "trtype": "TCP", 00:12:07.231 "adrfam": "IPv4", 00:12:07.231 "traddr": "10.0.0.1", 00:12:07.231 "trsvcid": "49292" 00:12:07.231 }, 00:12:07.231 "auth": { 00:12:07.231 "state": "completed", 00:12:07.231 "digest": "sha512", 00:12:07.231 "dhgroup": "null" 00:12:07.231 } 00:12:07.231 } 00:12:07.231 ]' 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.231 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.490 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:07.490 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.490 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.490 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.490 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.749 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:07.749 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.315 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:08.315 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.573 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.832 00:12:09.092 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.092 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.092 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.350 { 00:12:09.350 "cntlid": 101, 00:12:09.350 "qid": 0, 00:12:09.350 "state": "enabled", 00:12:09.350 "thread": "nvmf_tgt_poll_group_000", 00:12:09.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:09.350 "listen_address": { 00:12:09.350 "trtype": "TCP", 00:12:09.350 "adrfam": "IPv4", 00:12:09.350 "traddr": "10.0.0.3", 00:12:09.350 "trsvcid": "4420" 00:12:09.350 }, 00:12:09.350 "peer_address": { 00:12:09.350 "trtype": "TCP", 00:12:09.350 "adrfam": "IPv4", 00:12:09.350 "traddr": "10.0.0.1", 00:12:09.350 "trsvcid": "49328" 00:12:09.350 }, 00:12:09.350 "auth": { 00:12:09.350 "state": "completed", 00:12:09.350 "digest": "sha512", 00:12:09.350 "dhgroup": "null" 00:12:09.350 } 00:12:09.350 } 00:12:09.350 ]' 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.350 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.609 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:09.609 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:10.175 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.434 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.000 00:12:11.000 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.000 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.000 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.000 { 00:12:11.000 "cntlid": 103, 00:12:11.000 "qid": 0, 00:12:11.000 "state": "enabled", 00:12:11.000 "thread": "nvmf_tgt_poll_group_000", 00:12:11.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:11.000 "listen_address": { 00:12:11.000 "trtype": "TCP", 00:12:11.000 "adrfam": "IPv4", 00:12:11.000 "traddr": "10.0.0.3", 00:12:11.000 "trsvcid": "4420" 00:12:11.000 }, 00:12:11.000 "peer_address": { 00:12:11.000 "trtype": "TCP", 00:12:11.000 "adrfam": "IPv4", 00:12:11.000 "traddr": "10.0.0.1", 00:12:11.000 "trsvcid": "49362" 00:12:11.000 }, 00:12:11.000 "auth": { 00:12:11.000 "state": "completed", 00:12:11.000 "digest": "sha512", 00:12:11.000 "dhgroup": "null" 00:12:11.000 } 00:12:11.000 } 00:12:11.000 ]' 00:12:11.000 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.259 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.518 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:11.518 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:12.166 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.427 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.994 00:12:12.994 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.994 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.994 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.251 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.251 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.251 
13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.251 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.251 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.251 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.251 { 00:12:13.251 "cntlid": 105, 00:12:13.251 "qid": 0, 00:12:13.251 "state": "enabled", 00:12:13.251 "thread": "nvmf_tgt_poll_group_000", 00:12:13.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:13.251 "listen_address": { 00:12:13.251 "trtype": "TCP", 00:12:13.251 "adrfam": "IPv4", 00:12:13.251 "traddr": "10.0.0.3", 00:12:13.251 "trsvcid": "4420" 00:12:13.251 }, 00:12:13.251 "peer_address": { 00:12:13.251 "trtype": "TCP", 00:12:13.251 "adrfam": "IPv4", 00:12:13.251 "traddr": "10.0.0.1", 00:12:13.252 "trsvcid": "37076" 00:12:13.252 }, 00:12:13.252 "auth": { 00:12:13.252 "state": "completed", 00:12:13.252 "digest": "sha512", 00:12:13.252 "dhgroup": "ffdhe2048" 00:12:13.252 } 00:12:13.252 } 00:12:13.252 ]' 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.252 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.817 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:13.817 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:14.382 13:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:14.382 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.640 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.897 00:12:14.897 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.897 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.897 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.155 { 00:12:15.155 "cntlid": 107, 00:12:15.155 "qid": 0, 00:12:15.155 "state": "enabled", 00:12:15.155 "thread": "nvmf_tgt_poll_group_000", 00:12:15.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:15.155 "listen_address": { 00:12:15.155 "trtype": "TCP", 00:12:15.155 "adrfam": "IPv4", 00:12:15.155 "traddr": "10.0.0.3", 00:12:15.155 "trsvcid": "4420" 00:12:15.155 }, 00:12:15.155 "peer_address": { 00:12:15.155 "trtype": "TCP", 00:12:15.155 "adrfam": "IPv4", 00:12:15.155 "traddr": "10.0.0.1", 00:12:15.155 "trsvcid": "37108" 00:12:15.155 }, 00:12:15.155 "auth": { 00:12:15.155 "state": "completed", 00:12:15.155 "digest": "sha512", 00:12:15.155 "dhgroup": "ffdhe2048" 00:12:15.155 } 00:12:15.155 } 00:12:15.155 ]' 00:12:15.155 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.413 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.671 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:15.671 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.236 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.503 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:16.503 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:16.761 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:16.761 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.761 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.761 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.762 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.068 00:12:17.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.326 { 00:12:17.326 "cntlid": 109, 00:12:17.326 "qid": 0, 00:12:17.326 "state": "enabled", 00:12:17.326 "thread": "nvmf_tgt_poll_group_000", 00:12:17.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:17.326 "listen_address": { 00:12:17.326 "trtype": "TCP", 00:12:17.326 "adrfam": "IPv4", 00:12:17.326 "traddr": "10.0.0.3", 00:12:17.326 "trsvcid": "4420" 00:12:17.326 }, 00:12:17.326 "peer_address": { 00:12:17.326 "trtype": "TCP", 00:12:17.326 "adrfam": "IPv4", 00:12:17.326 "traddr": "10.0.0.1", 00:12:17.326 "trsvcid": "37144" 00:12:17.326 }, 00:12:17.326 "auth": { 00:12:17.326 "state": "completed", 00:12:17.326 "digest": "sha512", 00:12:17.326 "dhgroup": "ffdhe2048" 00:12:17.326 } 00:12:17.326 } 00:12:17.326 ]' 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.326 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.584 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.584 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.584 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.842 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:17.842 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.408 13:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.408 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.667 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.925 00:12:18.925 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.925 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.925 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.184 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.184 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.184 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.184 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.442 { 00:12:19.442 "cntlid": 111, 00:12:19.442 "qid": 0, 00:12:19.442 "state": "enabled", 00:12:19.442 "thread": "nvmf_tgt_poll_group_000", 00:12:19.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:19.442 "listen_address": { 00:12:19.442 "trtype": "TCP", 00:12:19.442 "adrfam": "IPv4", 00:12:19.442 "traddr": "10.0.0.3", 00:12:19.442 "trsvcid": "4420" 00:12:19.442 }, 00:12:19.442 "peer_address": { 00:12:19.442 "trtype": "TCP", 00:12:19.442 "adrfam": "IPv4", 00:12:19.442 "traddr": "10.0.0.1", 00:12:19.442 "trsvcid": "37176" 00:12:19.442 }, 00:12:19.442 "auth": { 00:12:19.442 "state": "completed", 00:12:19.442 "digest": "sha512", 00:12:19.442 "dhgroup": "ffdhe2048" 00:12:19.442 } 00:12:19.442 } 00:12:19.442 ]' 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.442 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.700 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:19.700 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.636 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.204 00:12:21.204 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.204 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.204 13:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.462 { 00:12:21.462 "cntlid": 113, 00:12:21.462 "qid": 0, 00:12:21.462 "state": "enabled", 00:12:21.462 "thread": "nvmf_tgt_poll_group_000", 00:12:21.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:21.462 "listen_address": { 00:12:21.462 "trtype": "TCP", 00:12:21.462 "adrfam": "IPv4", 00:12:21.462 "traddr": "10.0.0.3", 00:12:21.462 "trsvcid": "4420" 00:12:21.462 }, 00:12:21.462 "peer_address": { 00:12:21.462 "trtype": "TCP", 00:12:21.462 "adrfam": "IPv4", 00:12:21.462 "traddr": "10.0.0.1", 00:12:21.462 "trsvcid": "37192" 00:12:21.462 }, 00:12:21.462 "auth": { 00:12:21.462 "state": "completed", 00:12:21.462 "digest": "sha512", 00:12:21.462 "dhgroup": "ffdhe3072" 00:12:21.462 } 00:12:21.462 } 00:12:21.462 ]' 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.462 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.048 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:22.048 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 
00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.613 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.871 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.129 00:12:23.129 13:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.129 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.129 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.387 { 00:12:23.387 "cntlid": 115, 00:12:23.387 "qid": 0, 00:12:23.387 "state": "enabled", 00:12:23.387 "thread": "nvmf_tgt_poll_group_000", 00:12:23.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:23.387 "listen_address": { 00:12:23.387 "trtype": "TCP", 00:12:23.387 "adrfam": "IPv4", 00:12:23.387 "traddr": "10.0.0.3", 00:12:23.387 "trsvcid": "4420" 00:12:23.387 }, 00:12:23.387 "peer_address": { 00:12:23.387 "trtype": "TCP", 00:12:23.387 "adrfam": "IPv4", 00:12:23.387 "traddr": "10.0.0.1", 00:12:23.387 "trsvcid": "50300" 00:12:23.387 }, 00:12:23.387 "auth": { 00:12:23.387 "state": "completed", 00:12:23.387 "digest": "sha512", 00:12:23.387 "dhgroup": "ffdhe3072" 00:12:23.387 } 00:12:23.387 } 00:12:23.387 ]' 00:12:23.387 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.646 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.904 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:23.904 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret 
DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.471 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.729 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.296 00:12:25.296 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.296 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.296 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.554 { 00:12:25.554 "cntlid": 117, 00:12:25.554 "qid": 0, 00:12:25.554 "state": "enabled", 00:12:25.554 "thread": "nvmf_tgt_poll_group_000", 00:12:25.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:25.554 "listen_address": { 00:12:25.554 "trtype": "TCP", 00:12:25.554 "adrfam": "IPv4", 00:12:25.554 "traddr": "10.0.0.3", 00:12:25.554 "trsvcid": "4420" 00:12:25.554 }, 00:12:25.554 "peer_address": { 00:12:25.554 "trtype": "TCP", 00:12:25.554 "adrfam": "IPv4", 00:12:25.554 "traddr": "10.0.0.1", 00:12:25.554 "trsvcid": "50324" 00:12:25.554 }, 00:12:25.554 "auth": { 00:12:25.554 "state": "completed", 00:12:25.554 "digest": "sha512", 00:12:25.554 "dhgroup": "ffdhe3072" 00:12:25.554 } 00:12:25.554 } 00:12:25.554 ]' 00:12:25.554 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.555 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.813 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:25.813 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:26.747 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.005 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.006 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.006 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.264 00:12:27.264 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.264 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.264 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.831 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.831 { 00:12:27.831 "cntlid": 119, 00:12:27.831 "qid": 0, 00:12:27.831 "state": "enabled", 00:12:27.831 "thread": "nvmf_tgt_poll_group_000", 00:12:27.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:27.832 "listen_address": { 00:12:27.832 "trtype": "TCP", 00:12:27.832 "adrfam": "IPv4", 00:12:27.832 "traddr": "10.0.0.3", 00:12:27.832 "trsvcid": "4420" 00:12:27.832 }, 00:12:27.832 "peer_address": { 00:12:27.832 "trtype": "TCP", 00:12:27.832 "adrfam": "IPv4", 00:12:27.832 "traddr": "10.0.0.1", 00:12:27.832 "trsvcid": "50354" 00:12:27.832 }, 00:12:27.832 "auth": { 00:12:27.832 "state": "completed", 00:12:27.832 "digest": "sha512", 00:12:27.832 "dhgroup": "ffdhe3072" 00:12:27.832 } 00:12:27.832 } 00:12:27.832 ]' 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.832 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.090 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:28.090 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.657 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.916 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.483 00:12:29.483 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.483 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.483 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.742 { 00:12:29.742 "cntlid": 121, 00:12:29.742 "qid": 0, 00:12:29.742 "state": "enabled", 00:12:29.742 "thread": "nvmf_tgt_poll_group_000", 00:12:29.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:29.742 "listen_address": { 00:12:29.742 "trtype": "TCP", 00:12:29.742 "adrfam": "IPv4", 00:12:29.742 "traddr": "10.0.0.3", 00:12:29.742 "trsvcid": "4420" 00:12:29.742 }, 00:12:29.742 "peer_address": { 00:12:29.742 "trtype": "TCP", 00:12:29.742 "adrfam": "IPv4", 00:12:29.742 "traddr": "10.0.0.1", 00:12:29.742 "trsvcid": "50386" 00:12:29.742 }, 00:12:29.742 "auth": { 00:12:29.742 "state": "completed", 00:12:29.742 "digest": "sha512", 00:12:29.742 "dhgroup": "ffdhe4096" 00:12:29.742 } 00:12:29.742 } 00:12:29.742 ]' 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.742 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.000 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.000 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.000 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.259 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret 
DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:30.259 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.826 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.085 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.344 00:12:31.602 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.602 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.602 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.861 { 00:12:31.861 "cntlid": 123, 00:12:31.861 "qid": 0, 00:12:31.861 "state": "enabled", 00:12:31.861 "thread": "nvmf_tgt_poll_group_000", 00:12:31.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:31.861 "listen_address": { 00:12:31.861 "trtype": "TCP", 00:12:31.861 "adrfam": "IPv4", 00:12:31.861 "traddr": "10.0.0.3", 00:12:31.861 "trsvcid": "4420" 00:12:31.861 }, 00:12:31.861 "peer_address": { 00:12:31.861 "trtype": "TCP", 00:12:31.861 "adrfam": "IPv4", 00:12:31.861 "traddr": "10.0.0.1", 00:12:31.861 "trsvcid": "50406" 00:12:31.861 }, 00:12:31.861 "auth": { 00:12:31.861 "state": "completed", 00:12:31.861 "digest": "sha512", 00:12:31.861 "dhgroup": "ffdhe4096" 00:12:31.861 } 00:12:31.861 } 00:12:31.861 ]' 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.861 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.428 13:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:32.428 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.027 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.295 13:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.295 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.553 00:12:33.812 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.812 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.812 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.070 { 00:12:34.070 "cntlid": 125, 00:12:34.070 "qid": 0, 00:12:34.070 "state": "enabled", 00:12:34.070 "thread": "nvmf_tgt_poll_group_000", 00:12:34.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:34.070 "listen_address": { 00:12:34.070 "trtype": "TCP", 00:12:34.070 "adrfam": "IPv4", 00:12:34.070 "traddr": "10.0.0.3", 00:12:34.070 "trsvcid": "4420" 00:12:34.070 }, 00:12:34.070 "peer_address": { 00:12:34.070 "trtype": "TCP", 00:12:34.070 "adrfam": "IPv4", 00:12:34.070 "traddr": "10.0.0.1", 00:12:34.070 "trsvcid": "52332" 00:12:34.070 }, 00:12:34.070 "auth": { 00:12:34.070 "state": "completed", 00:12:34.070 "digest": "sha512", 00:12:34.070 "dhgroup": "ffdhe4096" 00:12:34.070 } 00:12:34.070 } 00:12:34.070 ]' 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.070 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.070 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.070 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.070 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.329 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:34.329 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.263 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.522 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.781 00:12:35.781 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.781 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.781 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.039 { 00:12:36.039 "cntlid": 127, 00:12:36.039 "qid": 0, 00:12:36.039 "state": "enabled", 00:12:36.039 "thread": "nvmf_tgt_poll_group_000", 00:12:36.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:36.039 "listen_address": { 00:12:36.039 "trtype": "TCP", 00:12:36.039 "adrfam": "IPv4", 00:12:36.039 "traddr": "10.0.0.3", 00:12:36.039 "trsvcid": "4420" 00:12:36.039 }, 00:12:36.039 "peer_address": { 00:12:36.039 "trtype": "TCP", 00:12:36.039 "adrfam": "IPv4", 00:12:36.039 "traddr": "10.0.0.1", 00:12:36.039 "trsvcid": "52366" 00:12:36.039 }, 00:12:36.039 "auth": { 00:12:36.039 "state": "completed", 00:12:36.039 "digest": "sha512", 00:12:36.039 "dhgroup": "ffdhe4096" 00:12:36.039 } 00:12:36.039 } 00:12:36.039 ]' 00:12:36.039 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.298 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.556 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:36.556 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:37.490 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.490 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:37.490 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.490 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.491 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.491 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.491 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.491 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:37.491 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.749 13:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.749 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.008 00:12:38.008 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.008 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.008 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.575 { 00:12:38.575 "cntlid": 129, 00:12:38.575 "qid": 0, 00:12:38.575 "state": "enabled", 00:12:38.575 "thread": "nvmf_tgt_poll_group_000", 00:12:38.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:38.575 "listen_address": { 00:12:38.575 "trtype": "TCP", 00:12:38.575 "adrfam": "IPv4", 00:12:38.575 "traddr": "10.0.0.3", 00:12:38.575 "trsvcid": "4420" 00:12:38.575 }, 00:12:38.575 "peer_address": { 00:12:38.575 "trtype": "TCP", 00:12:38.575 "adrfam": "IPv4", 00:12:38.575 "traddr": "10.0.0.1", 00:12:38.575 "trsvcid": "52390" 00:12:38.575 }, 00:12:38.575 "auth": { 00:12:38.575 "state": "completed", 00:12:38.575 "digest": "sha512", 00:12:38.575 "dhgroup": "ffdhe6144" 00:12:38.575 } 00:12:38.575 } 00:12:38.575 ]' 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.575 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.862 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:38.863 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:39.797 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.056 13:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.056 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.623 00:12:40.623 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.623 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.623 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.881 { 00:12:40.881 "cntlid": 131, 00:12:40.881 "qid": 0, 00:12:40.881 "state": "enabled", 00:12:40.881 "thread": "nvmf_tgt_poll_group_000", 00:12:40.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:40.881 "listen_address": { 00:12:40.881 "trtype": "TCP", 00:12:40.881 "adrfam": "IPv4", 00:12:40.881 "traddr": "10.0.0.3", 00:12:40.881 "trsvcid": "4420" 00:12:40.881 }, 00:12:40.881 "peer_address": { 00:12:40.881 "trtype": "TCP", 00:12:40.881 "adrfam": "IPv4", 00:12:40.881 "traddr": "10.0.0.1", 00:12:40.881 "trsvcid": "52422" 00:12:40.881 }, 00:12:40.881 "auth": { 00:12:40.881 "state": "completed", 00:12:40.881 "digest": "sha512", 00:12:40.881 "dhgroup": "ffdhe6144" 00:12:40.881 } 00:12:40.881 } 00:12:40.881 ]' 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.881 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.448 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:41.448 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.014 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.271 13:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.271 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.837 00:12:42.837 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.837 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.837 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.095 { 00:12:43.095 "cntlid": 133, 00:12:43.095 "qid": 0, 00:12:43.095 "state": "enabled", 00:12:43.095 "thread": "nvmf_tgt_poll_group_000", 00:12:43.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:43.095 "listen_address": { 00:12:43.095 "trtype": "TCP", 00:12:43.095 "adrfam": "IPv4", 00:12:43.095 "traddr": "10.0.0.3", 00:12:43.095 "trsvcid": "4420" 00:12:43.095 }, 00:12:43.095 "peer_address": { 00:12:43.095 "trtype": "TCP", 00:12:43.095 "adrfam": "IPv4", 00:12:43.095 "traddr": "10.0.0.1", 00:12:43.095 "trsvcid": "52450" 00:12:43.095 }, 00:12:43.095 "auth": { 00:12:43.095 "state": "completed", 00:12:43.095 "digest": "sha512", 00:12:43.095 "dhgroup": "ffdhe6144" 00:12:43.095 } 00:12:43.095 } 00:12:43.095 ]' 00:12:43.095 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.095 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.354 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:43.354 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.350 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.917 00:12:44.917 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.917 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.917 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.175 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.175 { 00:12:45.175 "cntlid": 135, 00:12:45.175 "qid": 0, 00:12:45.175 "state": "enabled", 00:12:45.175 "thread": "nvmf_tgt_poll_group_000", 00:12:45.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:45.175 "listen_address": { 00:12:45.175 "trtype": "TCP", 00:12:45.175 "adrfam": "IPv4", 00:12:45.175 "traddr": "10.0.0.3", 00:12:45.175 "trsvcid": "4420" 00:12:45.176 }, 00:12:45.176 "peer_address": { 00:12:45.176 "trtype": "TCP", 00:12:45.176 "adrfam": "IPv4", 00:12:45.176 "traddr": "10.0.0.1", 00:12:45.176 "trsvcid": "35620" 00:12:45.176 }, 00:12:45.176 "auth": { 00:12:45.176 "state": "completed", 00:12:45.176 "digest": "sha512", 00:12:45.176 "dhgroup": "ffdhe6144" 00:12:45.176 } 00:12:45.176 } 00:12:45.176 ]' 00:12:45.176 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.434 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.693 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:45.693 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.260 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.828 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.395 00:12:47.395 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.395 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.395 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.653 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.653 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.653 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.654 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.654 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.654 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.654 { 00:12:47.654 "cntlid": 137, 00:12:47.654 "qid": 0, 00:12:47.654 "state": "enabled", 00:12:47.654 "thread": "nvmf_tgt_poll_group_000", 00:12:47.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:47.654 "listen_address": { 00:12:47.654 "trtype": "TCP", 00:12:47.654 "adrfam": "IPv4", 00:12:47.654 "traddr": "10.0.0.3", 00:12:47.654 "trsvcid": "4420" 00:12:47.654 }, 00:12:47.654 "peer_address": { 00:12:47.654 "trtype": "TCP", 00:12:47.654 "adrfam": "IPv4", 00:12:47.654 "traddr": "10.0.0.1", 00:12:47.654 "trsvcid": "35654" 00:12:47.654 }, 00:12:47.654 "auth": { 00:12:47.654 "state": "completed", 00:12:47.654 "digest": "sha512", 00:12:47.654 "dhgroup": "ffdhe8192" 00:12:47.654 } 00:12:47.654 } 00:12:47.654 ]' 00:12:47.654 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.654 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.654 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.913 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.913 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.913 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.913 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.913 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.171 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:48.171 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:48.738 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.738 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:48.738 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.738 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.997 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.997 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:48.997 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:49.256 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.256 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.824 00:12:49.824 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.824 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.824 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.393 { 00:12:50.393 "cntlid": 139, 00:12:50.393 "qid": 0, 00:12:50.393 "state": "enabled", 00:12:50.393 "thread": "nvmf_tgt_poll_group_000", 00:12:50.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:50.393 "listen_address": { 00:12:50.393 "trtype": "TCP", 00:12:50.393 "adrfam": "IPv4", 00:12:50.393 "traddr": "10.0.0.3", 00:12:50.393 "trsvcid": "4420" 00:12:50.393 }, 00:12:50.393 "peer_address": { 00:12:50.393 "trtype": "TCP", 00:12:50.393 "adrfam": "IPv4", 00:12:50.393 "traddr": "10.0.0.1", 00:12:50.393 "trsvcid": "35682" 00:12:50.393 }, 00:12:50.393 "auth": { 00:12:50.393 "state": "completed", 00:12:50.393 "digest": "sha512", 00:12:50.393 "dhgroup": "ffdhe8192" 00:12:50.393 } 00:12:50.393 } 00:12:50.393 ]' 00:12:50.393 13:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.393 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.652 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:50.652 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: --dhchap-ctrl-secret DHHC-1:02:OGE3YzI5OGEzMzg5ZWEwZTY3NmNhODE2MDNjYjU4MDU1YmE1NjJmNDgxOWU5MDUwQj7MMw==: 00:12:51.219 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.219 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:51.219 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.219 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.495 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.495 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.495 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:51.495 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.754 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.321 00:12:52.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.579 { 00:12:52.579 "cntlid": 141, 00:12:52.579 "qid": 0, 00:12:52.579 "state": "enabled", 00:12:52.579 "thread": "nvmf_tgt_poll_group_000", 00:12:52.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:52.579 "listen_address": { 00:12:52.579 "trtype": "TCP", 00:12:52.579 "adrfam": "IPv4", 00:12:52.579 "traddr": "10.0.0.3", 00:12:52.579 "trsvcid": "4420" 00:12:52.579 }, 00:12:52.579 "peer_address": { 00:12:52.579 "trtype": "TCP", 00:12:52.579 "adrfam": "IPv4", 00:12:52.579 "traddr": "10.0.0.1", 00:12:52.579 "trsvcid": "35706" 00:12:52.579 }, 00:12:52.579 "auth": { 00:12:52.579 "state": "completed", 00:12:52.579 "digest": 
"sha512", 00:12:52.579 "dhgroup": "ffdhe8192" 00:12:52.579 } 00:12:52.579 } 00:12:52.579 ]' 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.579 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.580 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.838 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.838 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.838 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.838 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.838 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.097 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:53.097 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:01:MDI0MTFiMGM3NDVlNzMyMTE2MGMwMDE2MDQ3Njc1OWWI50DC: 00:12:53.666 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.666 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:53.666 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.666 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.926 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.926 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:53.926 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.185 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.791 00:12:54.791 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.791 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.791 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.089 { 00:12:55.089 "cntlid": 143, 00:12:55.089 "qid": 0, 00:12:55.089 "state": "enabled", 00:12:55.089 "thread": "nvmf_tgt_poll_group_000", 00:12:55.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:55.089 "listen_address": { 00:12:55.089 "trtype": "TCP", 00:12:55.089 "adrfam": "IPv4", 00:12:55.089 "traddr": "10.0.0.3", 00:12:55.089 "trsvcid": "4420" 00:12:55.089 }, 00:12:55.089 "peer_address": { 00:12:55.089 "trtype": "TCP", 00:12:55.089 "adrfam": "IPv4", 00:12:55.089 "traddr": "10.0.0.1", 00:12:55.089 "trsvcid": "40262" 00:12:55.089 }, 00:12:55.089 "auth": { 00:12:55.089 "state": "completed", 00:12:55.089 
"digest": "sha512", 00:12:55.089 "dhgroup": "ffdhe8192" 00:12:55.089 } 00:12:55.089 } 00:12:55.089 ]' 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.089 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.089 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.089 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.089 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.089 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.089 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.348 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:55.348 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:56.286 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:56.545 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:56.545 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.546 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.112 00:12:57.112 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.112 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.112 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.370 { 00:12:57.370 "cntlid": 145, 00:12:57.370 "qid": 0, 00:12:57.370 "state": "enabled", 00:12:57.370 "thread": "nvmf_tgt_poll_group_000", 00:12:57.370 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:57.370 "listen_address": { 00:12:57.370 "trtype": "TCP", 00:12:57.370 "adrfam": "IPv4", 00:12:57.370 "traddr": "10.0.0.3", 00:12:57.370 "trsvcid": "4420" 00:12:57.370 }, 00:12:57.370 "peer_address": { 00:12:57.370 "trtype": "TCP", 00:12:57.370 "adrfam": "IPv4", 00:12:57.370 "traddr": "10.0.0.1", 00:12:57.370 "trsvcid": "40284" 00:12:57.370 }, 00:12:57.370 "auth": { 00:12:57.370 "state": "completed", 00:12:57.370 "digest": "sha512", 00:12:57.370 "dhgroup": "ffdhe8192" 00:12:57.370 } 00:12:57.370 } 00:12:57.370 ]' 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.370 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.371 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:57.371 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.630 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.630 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.630 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.890 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:57.890 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:00:NWM0NDY4ZWUyNWFjZTY5M2RkZGJmYjBiYjViODg4Mzc3ZjE1YTVjYThiMTg4NGRmS4JpFA==: --dhchap-ctrl-secret DHHC-1:03:M2IxOWZhMzBkNjFmMGRiNmI5NjQxZWFiMWVlNWM1NGUxYmY2NWNmNTg3MjQ1YzEyNDk2N2NkYWVhYTg1MGRhMODCDWs=: 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 00:12:58.458 13:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:58.458 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:59.026 request: 00:12:59.026 { 00:12:59.026 "name": "nvme0", 00:12:59.026 "trtype": "tcp", 00:12:59.026 "traddr": "10.0.0.3", 00:12:59.026 "adrfam": "ipv4", 00:12:59.026 "trsvcid": "4420", 00:12:59.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:59.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:59.026 "prchk_reftag": false, 00:12:59.026 "prchk_guard": false, 00:12:59.026 "hdgst": false, 00:12:59.026 "ddgst": false, 00:12:59.026 "dhchap_key": "key2", 00:12:59.026 "allow_unrecognized_csi": false, 00:12:59.026 "method": "bdev_nvme_attach_controller", 00:12:59.026 "req_id": 1 00:12:59.026 } 00:12:59.026 Got JSON-RPC error response 00:12:59.026 response: 00:12:59.026 { 00:12:59.026 "code": -5, 00:12:59.026 "message": "Input/output error" 00:12:59.026 } 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:59.026 
13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.026 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.594 request: 00:12:59.594 { 00:12:59.594 "name": "nvme0", 00:12:59.594 "trtype": "tcp", 00:12:59.594 "traddr": "10.0.0.3", 00:12:59.594 "adrfam": "ipv4", 00:12:59.594 "trsvcid": "4420", 00:12:59.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:59.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:12:59.594 "prchk_reftag": false, 00:12:59.594 "prchk_guard": false, 00:12:59.594 "hdgst": false, 00:12:59.594 "ddgst": false, 00:12:59.594 "dhchap_key": "key1", 00:12:59.594 "dhchap_ctrlr_key": "ckey2", 00:12:59.594 "allow_unrecognized_csi": false, 00:12:59.594 "method": "bdev_nvme_attach_controller", 00:12:59.594 "req_id": 1 00:12:59.594 } 00:12:59.594 Got JSON-RPC error response 00:12:59.594 response: 00:12:59.594 { 
00:12:59.594 "code": -5, 00:12:59.594 "message": "Input/output error" 00:12:59.594 } 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.594 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.852 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.419 
request: 00:13:00.419 { 00:13:00.419 "name": "nvme0", 00:13:00.419 "trtype": "tcp", 00:13:00.419 "traddr": "10.0.0.3", 00:13:00.419 "adrfam": "ipv4", 00:13:00.419 "trsvcid": "4420", 00:13:00.419 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:00.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:00.419 "prchk_reftag": false, 00:13:00.419 "prchk_guard": false, 00:13:00.419 "hdgst": false, 00:13:00.419 "ddgst": false, 00:13:00.419 "dhchap_key": "key1", 00:13:00.419 "dhchap_ctrlr_key": "ckey1", 00:13:00.419 "allow_unrecognized_csi": false, 00:13:00.419 "method": "bdev_nvme_attach_controller", 00:13:00.419 "req_id": 1 00:13:00.419 } 00:13:00.419 Got JSON-RPC error response 00:13:00.419 response: 00:13:00.419 { 00:13:00.419 "code": -5, 00:13:00.419 "message": "Input/output error" 00:13:00.419 } 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 68453 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68453 ']' 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68453 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68453 00:13:00.419 killing process with pid 68453 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68453' 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68453 00:13:00.419 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68453 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.677 13:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=71540 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 71540 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71540 ']' 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.677 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 71540 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71540 ']' 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.936 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.195 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.195 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:01.195 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:01.195 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.195 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 null0 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KgS 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Jpo ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jpo 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Py4 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k8R ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k8R 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.454 13:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xqV 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZW4 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZW4 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yZW 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.454 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:01.455 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.388 nvme0n1 00:13:02.388 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.388 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.388 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.955 { 00:13:02.955 "cntlid": 1, 00:13:02.955 "qid": 0, 00:13:02.955 "state": "enabled", 00:13:02.955 "thread": "nvmf_tgt_poll_group_000", 00:13:02.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:02.955 "listen_address": { 00:13:02.955 "trtype": "TCP", 00:13:02.955 "adrfam": "IPv4", 00:13:02.955 "traddr": "10.0.0.3", 00:13:02.955 "trsvcid": "4420" 00:13:02.955 }, 00:13:02.955 "peer_address": { 00:13:02.955 "trtype": "TCP", 00:13:02.955 "adrfam": "IPv4", 00:13:02.955 "traddr": "10.0.0.1", 00:13:02.955 "trsvcid": "40336" 00:13:02.955 }, 00:13:02.955 "auth": { 00:13:02.955 "state": "completed", 00:13:02.955 "digest": "sha512", 00:13:02.955 "dhgroup": "ffdhe8192" 00:13:02.955 } 00:13:02.955 } 00:13:02.955 ]' 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.955 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.213 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:13:03.213 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:13:03.780 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key3 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:04.038 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.296 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.297 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.297 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.555 request: 00:13:04.555 { 00:13:04.555 "name": "nvme0", 00:13:04.555 "trtype": "tcp", 00:13:04.555 "traddr": "10.0.0.3", 00:13:04.555 "adrfam": "ipv4", 00:13:04.555 "trsvcid": "4420", 00:13:04.555 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:04.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:04.555 "prchk_reftag": false, 00:13:04.555 "prchk_guard": false, 00:13:04.555 "hdgst": false, 00:13:04.555 "ddgst": false, 00:13:04.555 "dhchap_key": "key3", 00:13:04.555 "allow_unrecognized_csi": false, 00:13:04.555 "method": "bdev_nvme_attach_controller", 00:13:04.555 "req_id": 1 00:13:04.555 } 00:13:04.555 Got JSON-RPC error response 00:13:04.555 response: 00:13:04.555 { 00:13:04.555 "code": -5, 00:13:04.555 "message": "Input/output error" 00:13:04.555 } 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:04.555 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:04.813 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.814 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.072 request: 00:13:05.072 { 00:13:05.072 "name": "nvme0", 00:13:05.072 "trtype": "tcp", 00:13:05.072 "traddr": "10.0.0.3", 00:13:05.072 "adrfam": "ipv4", 00:13:05.072 "trsvcid": "4420", 00:13:05.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:05.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:05.073 "prchk_reftag": false, 00:13:05.073 "prchk_guard": false, 00:13:05.073 "hdgst": false, 00:13:05.073 "ddgst": false, 00:13:05.073 "dhchap_key": "key3", 00:13:05.073 "allow_unrecognized_csi": false, 00:13:05.073 "method": "bdev_nvme_attach_controller", 00:13:05.073 "req_id": 1 00:13:05.073 } 00:13:05.073 Got JSON-RPC error response 00:13:05.073 response: 00:13:05.073 { 00:13:05.073 "code": -5, 00:13:05.073 "message": "Input/output error" 00:13:05.073 } 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.073 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.331 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.332 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.332 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.899 request: 00:13:05.899 { 00:13:05.899 "name": "nvme0", 00:13:05.899 "trtype": "tcp", 00:13:05.899 "traddr": "10.0.0.3", 00:13:05.899 "adrfam": "ipv4", 00:13:05.899 "trsvcid": "4420", 00:13:05.899 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:05.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:05.899 "prchk_reftag": false, 00:13:05.899 "prchk_guard": false, 00:13:05.899 "hdgst": false, 00:13:05.899 "ddgst": false, 00:13:05.899 "dhchap_key": "key0", 00:13:05.899 "dhchap_ctrlr_key": "key1", 00:13:05.899 "allow_unrecognized_csi": false, 00:13:05.899 "method": "bdev_nvme_attach_controller", 00:13:05.899 "req_id": 1 00:13:05.899 } 00:13:05.899 Got JSON-RPC error response 00:13:05.899 response: 00:13:05.899 { 00:13:05.899 "code": -5, 00:13:05.899 "message": "Input/output error" 00:13:05.899 } 00:13:05.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:05.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:05.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:05.900 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:05.900 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:06.158 nvme0n1 00:13:06.158 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:06.158 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.158 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:06.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:06.984 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:07.920 nvme0n1 00:13:07.920 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:07.920 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:07.920 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.179 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:08.441 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.441 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:13:08.441 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid 5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -l 0 --dhchap-secret DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: --dhchap-ctrl-secret DHHC-1:03:OTcwMzg5ZTM3OTJkYjQ3N2ZlZGMzNTliMjcxYzk3MzdmNTA3OWU2NGNmNDczNDAyNTc0YWZkMTliNzBhOTEzMo6hIs8=: 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.376 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:09.671 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:10.239 request: 00:13:10.239 { 00:13:10.239 "name": "nvme0", 00:13:10.239 "trtype": "tcp", 00:13:10.239 "traddr": "10.0.0.3", 00:13:10.239 "adrfam": "ipv4", 00:13:10.239 "trsvcid": "4420", 00:13:10.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:10.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5", 00:13:10.239 "prchk_reftag": false, 00:13:10.239 "prchk_guard": false, 00:13:10.239 "hdgst": false, 00:13:10.239 "ddgst": false, 00:13:10.239 "dhchap_key": "key1", 00:13:10.239 "allow_unrecognized_csi": false, 00:13:10.239 "method": "bdev_nvme_attach_controller", 00:13:10.239 "req_id": 1 00:13:10.239 } 00:13:10.239 Got JSON-RPC error response 00:13:10.239 response: 00:13:10.239 { 00:13:10.239 "code": -5, 00:13:10.239 "message": "Input/output error" 00:13:10.239 } 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:10.239 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:11.175 nvme0n1 00:13:11.175 
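The request/response block above is the expected negative case rather than a real failure: the subsystem has just been rotated to key2/key3 with nvmf_subsystem_set_keys, so an attach that still presents key1 is rejected and bdev_nvme_attach_controller returns code -5 (Input/output error); the NOT wrapper around that bdev_connect call expects exactly this failure. Re-attaching with the matching pair then succeeds and nvme0n1 reappears. A minimal re-statement of the successful host-side call, using only flags and values visible in this trace (key0..key3 name DHCHAP keys registered earlier in the test, outside this excerpt):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3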
13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:11.175 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.175 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:11.434 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.434 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.434 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:11.756 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:12.015 nvme0n1 00:13:12.015 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:12.015 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:12.015 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.282 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.282 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.282 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.542 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:12.542 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.542 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.543 13:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: '' 2s 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: ]] 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52: 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:12.543 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: 2s 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:15.077 13:54:07 
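nvme_set_keys here re-keys the kernel-side controller that nvme-cli created, without disconnecting: it locates the controller under /sys/devices/virtual/nvme-fabrics/ctl/nvme0, echoes the new DHHC-1 secret into it, sleeps for the 2s timeout, and waitforblk then confirms the namespace nvme0n1 is still exposed. A condensed sketch of that pattern follows; the sysfs attribute name is an assumption (the redirection target is not captured in this xtrace), commonly dhchap_secret for the host key:

  # sketch only; attribute name assumed, key string copied from the trace above
  dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  echo "DHHC-1:01:NDlhOGMxZTI5OWFhMDE0YWViZjE1NzhkM2FlZTMxNjlH5q52:" > "$dev/dhchap_secret"
  sleep 2
  lsblk -l -o NAME | grep -q -w nvme0n1 && echo "nvme0n1 still present after re-key"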
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: ]] 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmM5ZDZmNGVmZmI4ZGUyYmI1YmNmM2Y5YWY0YzZkM2I1YTc0OTQ5NTY2MjcyNzY3jvzGVA==: 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:15.077 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:16.981 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:17.917 nvme0n1 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:17.917 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:18.485 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:18.485 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.485 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:18.744 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:19.003 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:19.003 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:19.003 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:19.262 13:54:12 
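From @252 onward the test rotates DHCHAP keys on a live connection instead of detaching: the target side is moved first with nvmf_subsystem_set_keys, then the existing host bdev controller is updated in place with bdev_nvme_set_keys. The NOT-wrapped bdev_nvme_set_keys calls that follow deliberately present a pair the target no longer accepts and are expected to fail with code -13 (Permission denied), as the next request/response blocks show. The matching pair of calls, re-stated from the trace (rpc_cmd addresses the target application's RPC socket, whose path is not expanded in this xtrace; hostrpc uses /var/tmp/host.sock):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3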
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:19.262 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:20.199 request: 00:13:20.199 { 00:13:20.199 "name": "nvme0", 00:13:20.199 "dhchap_key": "key1", 00:13:20.199 "dhchap_ctrlr_key": "key3", 00:13:20.199 "method": "bdev_nvme_set_keys", 00:13:20.199 "req_id": 1 00:13:20.199 } 00:13:20.199 Got JSON-RPC error response 00:13:20.199 response: 00:13:20.199 { 00:13:20.199 "code": -13, 00:13:20.199 "message": "Permission denied" 00:13:20.199 } 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:20.199 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.199 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:20.199 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:21.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:21.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:22.520 nvme0n1 00:13:22.520 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:22.520 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.520 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:22.779 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:23.346 request: 00:13:23.346 { 00:13:23.346 "name": "nvme0", 00:13:23.346 "dhchap_key": "key2", 00:13:23.346 "dhchap_ctrlr_key": "key0", 00:13:23.346 "method": "bdev_nvme_set_keys", 00:13:23.346 "req_id": 1 00:13:23.346 } 00:13:23.346 Got JSON-RPC error response 00:13:23.346 response: 00:13:23.346 { 00:13:23.346 "code": -13, 00:13:23.346 "message": "Permission denied" 00:13:23.346 } 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.346 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:23.605 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:23.605 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:24.541 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:24.541 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.541 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68472 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68472 ']' 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68472 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.799 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68472 00:13:25.058 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:25.058 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:25.058 killing process with pid 68472 00:13:25.058 13:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68472' 00:13:25.058 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68472 00:13:25.059 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68472 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:25.627 rmmod nvme_tcp 00:13:25.627 rmmod nvme_fabrics 00:13:25.627 rmmod nvme_keyring 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 71540 ']' 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 71540 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71540 ']' 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71540 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71540 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71540' 00:13:25.627 killing process with pid 71540 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71540 00:13:25.627 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71540 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
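Teardown follows the usual autotest pattern: killprocess first verifies the pid is alive (kill -0) and that it is not a sudo wrapper (ps -o comm=) before kill + wait, once for the host-side app (pid 68472) and once for the nvmf target (pid 71540); nvmftestfini then unloads nvme-tcp, nvme-fabrics and nvme-keyring, and the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline continued below removes only the rules the test tagged with an SPDK_NVMF comment before the veth/bridge topology and the nvmf_tgt_ns_spdk namespace are deleted. A condensed, simplified sketch of the killprocess helper as traced above (the real helper in autotest_common.sh handles a few more cases):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # process already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }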
00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:25.886 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.145 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KgS /tmp/spdk.key-sha256.Py4 /tmp/spdk.key-sha384.xqV /tmp/spdk.key-sha512.yZW /tmp/spdk.key-sha512.Jpo /tmp/spdk.key-sha384.k8R /tmp/spdk.key-sha256.ZW4 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:26.145 00:13:26.145 real 3m12.845s 00:13:26.145 user 7m43.081s 00:13:26.145 sys 0m29.676s 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.145 ************************************ 00:13:26.145 END TEST nvmf_auth_target 
00:13:26.145 ************************************ 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.145 ************************************ 00:13:26.145 START TEST nvmf_bdevio_no_huge 00:13:26.145 ************************************ 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:26.145 * Looking for test storage... 00:13:26.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.145 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.406 --rc genhtml_branch_coverage=1 00:13:26.406 --rc genhtml_function_coverage=1 00:13:26.406 --rc genhtml_legend=1 00:13:26.406 --rc geninfo_all_blocks=1 00:13:26.406 --rc geninfo_unexecuted_blocks=1 00:13:26.406 00:13:26.406 ' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.406 --rc genhtml_branch_coverage=1 00:13:26.406 --rc genhtml_function_coverage=1 00:13:26.406 --rc genhtml_legend=1 00:13:26.406 --rc geninfo_all_blocks=1 00:13:26.406 --rc geninfo_unexecuted_blocks=1 00:13:26.406 00:13:26.406 ' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.406 --rc genhtml_branch_coverage=1 00:13:26.406 --rc genhtml_function_coverage=1 00:13:26.406 --rc genhtml_legend=1 00:13:26.406 --rc geninfo_all_blocks=1 00:13:26.406 --rc geninfo_unexecuted_blocks=1 00:13:26.406 00:13:26.406 ' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.406 --rc genhtml_branch_coverage=1 00:13:26.406 --rc genhtml_function_coverage=1 00:13:26.406 --rc genhtml_legend=1 00:13:26.406 --rc geninfo_all_blocks=1 00:13:26.406 --rc geninfo_unexecuted_blocks=1 00:13:26.406 00:13:26.406 ' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.406 
13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.406 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.407 
13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:26.407 Cannot find device "nvmf_init_br" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:26.407 Cannot find device "nvmf_init_br2" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:26.407 Cannot find device "nvmf_tgt_br" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.407 Cannot find device "nvmf_tgt_br2" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:26.407 Cannot find device "nvmf_init_br" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:26.407 Cannot find device "nvmf_init_br2" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:26.407 Cannot find device "nvmf_tgt_br" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:26.407 Cannot find device "nvmf_tgt_br2" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:26.407 Cannot find device "nvmf_br" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:26.407 Cannot find device "nvmf_init_if" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:26.407 Cannot find device "nvmf_init_if2" 00:13:26.407 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:26.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:26.680 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:26.681 13:54:19 
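nvmf_veth_init builds a self-contained test fabric: the initiator interfaces (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24) stay in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace, and each interface's veth peer is enslaved to the nvmf_br bridge immediately below; the earlier "Cannot find device" / "Cannot open network namespace" messages are only the idempotent cleanup of a topology that did not yet exist. Condensed to one of the two interface pairs, using the commands from this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br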
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:26.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:26.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:26.681 00:13:26.681 --- 10.0.0.3 ping statistics --- 00:13:26.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.681 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:26.681 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:26.681 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:13:26.681 00:13:26.681 --- 10.0.0.4 ping statistics --- 00:13:26.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.681 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:26.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:26.681 00:13:26.681 --- 10.0.0.1 ping statistics --- 00:13:26.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.681 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:26.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:13:26.681 00:13:26.681 --- 10.0.0.2 ping statistics --- 00:13:26.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.681 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.681 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=72192 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 72192 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 72192 ']' 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.954 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:26.954 [2024-12-11 13:54:19.780629] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
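A condensed sketch (not the literal common.sh code) of the veth/bridge topology that nvmf_veth_init traced out above. The earlier "Cannot find device" lines are the pre-setup cleanup failing harmlessly on a fresh host (each failing command is followed by "true" in the trace). Initiator-side interfaces stay in the default namespace, target-side interfaces live in nvmf_tgt_ns_spdk, and every peer leg is enslaved to the nvmf_br bridge; only one of the two interface pairs is shown, the if2/br2 pair is identical with 10.0.0.2 and 10.0.0.4.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + its bridge leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + its bridge leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # default namespace can now reach the target-side address
  # the target itself is then launched inside the namespace, sized for the --no-huge case:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &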
00:13:26.954 [2024-12-11 13:54:19.780768] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:26.954 [2024-12-11 13:54:19.948769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.213 [2024-12-11 13:54:20.018070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.213 [2024-12-11 13:54:20.018165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.213 [2024-12-11 13:54:20.018176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.213 [2024-12-11 13:54:20.018184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.213 [2024-12-11 13:54:20.018191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.213 [2024-12-11 13:54:20.018804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:13:27.213 [2024-12-11 13:54:20.019476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:13:27.213 [2024-12-11 13:54:20.019626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:13:27.213 [2024-12-11 13:54:20.019678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.213 [2024-12-11 13:54:20.024224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:27.781 [2024-12-11 13:54:20.823629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.040 Malloc0 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.040 13:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.040 [2024-12-11 13:54:20.874357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:28.040 { 00:13:28.040 "params": { 00:13:28.040 "name": "Nvme$subsystem", 00:13:28.040 "trtype": "$TEST_TRANSPORT", 00:13:28.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.040 "adrfam": "ipv4", 00:13:28.040 "trsvcid": "$NVMF_PORT", 00:13:28.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.040 "hdgst": ${hdgst:-false}, 00:13:28.040 "ddgst": ${ddgst:-false} 00:13:28.040 }, 00:13:28.040 "method": "bdev_nvme_attach_controller" 00:13:28.040 } 00:13:28.040 EOF 00:13:28.040 )") 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
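bdevio.sh then configures the running target over its RPC socket and starts the bdevio app against it. A hedged sketch of the same steps as plain rpc.py calls (the script's rpc_cmd wrapper adds retry and socket handling that is omitted here); flags and NQNs are taken from the trace above:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # bdevio receives its NVMe-oF initiator configuration as JSON on an anonymous file descriptor:
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024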
00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:28.040 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:28.040 "params": { 00:13:28.040 "name": "Nvme1", 00:13:28.040 "trtype": "tcp", 00:13:28.040 "traddr": "10.0.0.3", 00:13:28.040 "adrfam": "ipv4", 00:13:28.040 "trsvcid": "4420", 00:13:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.041 "hdgst": false, 00:13:28.041 "ddgst": false 00:13:28.041 }, 00:13:28.041 "method": "bdev_nvme_attach_controller" 00:13:28.041 }' 00:13:28.041 [2024-12-11 13:54:20.939927] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:28.041 [2024-12-11 13:54:20.940060] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72228 ] 00:13:28.299 [2024-12-11 13:54:21.100971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.299 [2024-12-11 13:54:21.186126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.299 [2024-12-11 13:54:21.186211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.299 [2024-12-11 13:54:21.186220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.299 [2024-12-11 13:54:21.200896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.558 I/O targets: 00:13:28.558 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:28.558 00:13:28.558 00:13:28.558 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.558 http://cunit.sourceforge.net/ 00:13:28.558 00:13:28.558 00:13:28.558 Suite: bdevio tests on: Nvme1n1 00:13:28.558 Test: blockdev write read block ...passed 00:13:28.558 Test: blockdev write zeroes read block ...passed 00:13:28.558 Test: blockdev write zeroes read no split ...passed 00:13:28.558 Test: blockdev write zeroes read split ...passed 00:13:28.558 Test: blockdev write zeroes read split partial ...passed 00:13:28.558 Test: blockdev reset ...[2024-12-11 13:54:21.444286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:28.558 [2024-12-11 13:54:21.444396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9b720 (9): Bad file descriptor 00:13:28.558 passed 00:13:28.558 Test: blockdev write read 8 blocks ...[2024-12-11 13:54:21.457978] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
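The JSON printed above is what gen_nvmf_target_json expands the heredoc template into; it drives bdevio's bdev-layer startup and is roughly equivalent to attaching the controller by RPC. The socket path below is hypothetical and shown only for illustration:
  scripts/rpc.py -s /var/tmp/bdevio.sock bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1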
00:13:28.558 passed 00:13:28.558 Test: blockdev write read size > 128k ...passed 00:13:28.558 Test: blockdev write read invalid size ...passed 00:13:28.558 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.558 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.558 Test: blockdev write read max offset ...passed 00:13:28.558 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.558 Test: blockdev writev readv 8 blocks ...passed 00:13:28.558 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.558 Test: blockdev writev readv block ...passed 00:13:28.558 Test: blockdev writev readv size > 128k ...passed 00:13:28.558 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.558 Test: blockdev comparev and writev ...[2024-12-11 13:54:21.468410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.468459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.468485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.468499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.468868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.468896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.468918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.468930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.469385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.469424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.469454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.469466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.469800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.469827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.469849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:28.558 [2024-12-11 13:54:21.469861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:28.558 passed 00:13:28.558 Test: blockdev nvme passthru rw ...passed 00:13:28.558 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.558 Test: blockdev nvme admin passthru ...[2024-12-11 13:54:21.471227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.558 [2024-12-11 13:54:21.471268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.471392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.558 [2024-12-11 13:54:21.471418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:28.558 [2024-12-11 13:54:21.471540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.558 [2024-12-11 13:54:21.471564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:28.559 [2024-12-11 13:54:21.471671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.559 [2024-12-11 13:54:21.471695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:28.559 passed 00:13:28.559 Test: blockdev copy ...passed 00:13:28.559 00:13:28.559 Run Summary: Type Total Ran Passed Failed Inactive 00:13:28.559 suites 1 1 n/a 0 0 00:13:28.559 tests 23 23 23 0 0 00:13:28.559 asserts 152 152 152 0 n/a 00:13:28.559 00:13:28.559 Elapsed time = 0.166 seconds 00:13:28.817 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.817 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.817 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.076 rmmod nvme_tcp 00:13:29.076 rmmod nvme_fabrics 00:13:29.076 rmmod nvme_keyring 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:13:29.076 13:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 72192 ']' 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 72192 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 72192 ']' 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 72192 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.076 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72192 00:13:29.076 killing process with pid 72192 00:13:29.076 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:29.076 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:29.076 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72192' 00:13:29.076 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 72192 00:13:29.076 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 72192 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:29.644 00:13:29.644 real 0m3.582s 00:13:29.644 user 0m10.982s 00:13:29.644 sys 0m1.439s 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.644 ************************************ 00:13:29.644 END TEST nvmf_bdevio_no_huge 00:13:29.644 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.644 ************************************ 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.904 ************************************ 00:13:29.904 START TEST nvmf_tls 00:13:29.904 ************************************ 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:29.904 * Looking for test storage... 
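nvmftestfini, traced just before the END banner, undoes the earlier setup: it kills the target pid, removes the nvme_tcp/nvme_fabrics/nvme_keyring modules, restores iptables by filtering out the SPDK_NVMF-tagged rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), and deletes the veth, bridge, and namespace devices. Each suite in this job is wrapped by run_test from autotest_common.sh, which prints the START/END banners and the timing block seen above; the call that kicks off the next suite is visible in the trace and amounts to:
  run_test "nvmf_tls" /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp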
00:13:29.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:29.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.904 --rc genhtml_branch_coverage=1 00:13:29.904 --rc genhtml_function_coverage=1 00:13:29.904 --rc genhtml_legend=1 00:13:29.904 --rc geninfo_all_blocks=1 00:13:29.904 --rc geninfo_unexecuted_blocks=1 00:13:29.904 00:13:29.904 ' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:29.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.904 --rc genhtml_branch_coverage=1 00:13:29.904 --rc genhtml_function_coverage=1 00:13:29.904 --rc genhtml_legend=1 00:13:29.904 --rc geninfo_all_blocks=1 00:13:29.904 --rc geninfo_unexecuted_blocks=1 00:13:29.904 00:13:29.904 ' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:29.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.904 --rc genhtml_branch_coverage=1 00:13:29.904 --rc genhtml_function_coverage=1 00:13:29.904 --rc genhtml_legend=1 00:13:29.904 --rc geninfo_all_blocks=1 00:13:29.904 --rc geninfo_unexecuted_blocks=1 00:13:29.904 00:13:29.904 ' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:29.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.904 --rc genhtml_branch_coverage=1 00:13:29.904 --rc genhtml_function_coverage=1 00:13:29.904 --rc genhtml_legend=1 00:13:29.904 --rc geninfo_all_blocks=1 00:13:29.904 --rc geninfo_unexecuted_blocks=1 00:13:29.904 00:13:29.904 ' 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.904 13:54:22 
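The lcov probe above goes through the version helpers in scripts/common.sh (lt 1.15 2 dispatches to cmp_versions 1.15 '<' 2, which splits on dots and compares field by field). A simplified, hypothetical stand-in for the same check, not the real helper:
  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov 1.15 predates 2: use the branch/function coverage LCOV_OPTS set above"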
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.904 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.905 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.165 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.165 
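The "[: : integer expression expected" line above is harmless script noise: build_nvmf_app_args in nvmf/common.sh tests a feature flag numerically while that variable is empty in this configuration. Which flag it is cannot be read from the trace; a hypothetical reproduction and the usual defensive form:
  flag=""                    # unset/empty flag, as in this run
  [ "$flag" -eq 1 ]          # prints "[: : integer expression expected" and returns non-zero
  [ "${flag:-0}" -eq 1 ]     # defaulting to 0 avoids the error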
13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:30.165 Cannot find device "nvmf_init_br" 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:30.165 Cannot find device "nvmf_init_br2" 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:30.165 Cannot find device "nvmf_tgt_br" 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:30.165 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.165 Cannot find device "nvmf_tgt_br2" 00:13:30.165 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:30.165 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:30.165 Cannot find device "nvmf_init_br" 00:13:30.165 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:30.165 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:30.166 Cannot find device "nvmf_init_br2" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:30.166 Cannot find device "nvmf_tgt_br" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:30.166 Cannot find device "nvmf_tgt_br2" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:30.166 Cannot find device "nvmf_br" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:30.166 Cannot find device "nvmf_init_if" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:30.166 Cannot find device "nvmf_init_if2" 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:30.166 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:30.425 13:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:30.425 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:30.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:30.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:13:30.426 00:13:30.426 --- 10.0.0.3 ping statistics --- 00:13:30.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.426 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:30.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:30.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:13:30.426 00:13:30.426 --- 10.0.0.4 ping statistics --- 00:13:30.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.426 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:30.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:13:30.426 00:13:30.426 --- 10.0.0.1 ping statistics --- 00:13:30.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.426 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:30.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:13:30.426 00:13:30.426 --- 10.0.0.2 ping statistics --- 00:13:30.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.426 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72458 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72458 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72458 ']' 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.426 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 [2024-12-11 13:54:23.453624] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
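For the TLS suite the target is started with --wait-for-rpc, so the framework pauses before subsystem initialization and the socket layer can still be tuned; the trace that follows switches the default implementation to ssl and exercises the tls_version option. A hedged sketch of that flow as explicit rpc.py calls (the framework_start_init step is how such a setup is normally completed and is not reached in the lines shown here):
  scripts/rpc.py sock_set_default_impl -i ssl
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
  scripts/rpc.py framework_start_init    # finish startup once the socket options are in place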
00:13:30.426 [2024-12-11 13:54:23.453740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.685 [2024-12-11 13:54:23.606959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.685 [2024-12-11 13:54:23.663807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.685 [2024-12-11 13:54:23.663854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.685 [2024-12-11 13:54:23.663866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.685 [2024-12-11 13:54:23.663874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.685 [2024-12-11 13:54:23.663881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.685 [2024-12-11 13:54:23.664263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.626 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.626 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:31.626 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.626 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.626 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.627 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.627 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:31.627 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:31.889 true 00:13:31.889 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:31.889 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:32.147 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:32.147 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:32.147 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:32.405 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:32.405 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:32.664 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:32.664 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:32.664 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:32.923 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:32.923 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:33.220 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:33.220 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:33.220 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.220 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:33.787 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:33.787 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:33.787 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:33.787 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.787 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:34.046 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:34.046 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:34.046 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:34.305 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.305 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:34.564 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.cu2lpdmxYW 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.UHnatT9J3Z 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cu2lpdmxYW 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.UHnatT9J3Z 00:13:34.824 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:35.084 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:35.343 [2024-12-11 13:54:28.369572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.602 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.cu2lpdmxYW 00:13:35.602 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cu2lpdmxYW 00:13:35.602 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:35.861 [2024-12-11 13:54:28.665901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.861 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:36.120 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:36.378 [2024-12-11 13:54:29.174113] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:36.378 [2024-12-11 13:54:29.174406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:36.378 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:36.636 malloc0 00:13:36.636 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.895 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cu2lpdmxYW 00:13:37.153 13:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.412 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cu2lpdmxYW 00:13:49.622 Initializing NVMe Controllers 00:13:49.622 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.622 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.622 Initialization complete. Launching workers. 00:13:49.622 ======================================================== 00:13:49.622 Latency(us) 00:13:49.622 Device Information : IOPS MiB/s Average min max 00:13:49.622 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8039.69 31.41 7962.97 1448.38 8910.07 00:13:49.622 ======================================================== 00:13:49.622 Total : 8039.69 31.41 7962.97 1448.38 8910.07 00:13:49.622 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu2lpdmxYW 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cu2lpdmxYW 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72702 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72702 /var/tmp/bdevperf.sock 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72702 ']' 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
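(Editor's note on the two interchange keys generated at target/tls.sh@119-120 above: the trace only shows a "python -" heredoc being invoked from nvmf/common.sh@733, so the sketch below is a reconstruction of what that helper appears to compute, not the verbatim source. Assumptions: the key argument is used as its literal ASCII bytes, a little-endian CRC-32 of those bytes is appended, and the result is base64-encoded between "NVMeTLSkey-1:<2-digit digest>:" and a trailing ":"; the 01/02 digest tags correspond to the SHA-256 and SHA-384 variants of the NVMe TLS PSK interchange format.)

format_interchange_psk_sketch() {
    # $1 = configured key string, $2 = digest selector (1 or 2)
    python3 - "$1" "$2" <<'PYEOF'
import base64
import sys
import zlib

key = sys.argv[1].encode()                    # literal ASCII bytes of the key string
digest = int(sys.argv[2])                     # 1 or 2, printed as the 01/02 field
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC-32 suffix
print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# If the assumptions hold, this reproduces key0 from the trace above:
#   format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
#   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: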
00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.622 [2024-12-11 13:54:40.587747] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:49.622 [2024-12-11 13:54:40.587827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72702 ] 00:13:49.622 [2024-12-11 13:54:40.726633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.622 [2024-12-11 13:54:40.779790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.622 [2024-12-11 13:54:40.835421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:49.622 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cu2lpdmxYW 00:13:49.622 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.622 [2024-12-11 13:54:41.462793] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.622 TLSTESTn1 00:13:49.623 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:49.623 Running I/O for 10 seconds... 
00:13:50.817 3328.00 IOPS, 13.00 MiB/s [2024-12-11T13:54:44.800Z] 3392.00 IOPS, 13.25 MiB/s [2024-12-11T13:54:45.735Z] 3413.33 IOPS, 13.33 MiB/s [2024-12-11T13:54:47.120Z] 3418.25 IOPS, 13.35 MiB/s [2024-12-11T13:54:48.054Z] 3420.60 IOPS, 13.36 MiB/s [2024-12-11T13:54:48.995Z] 3434.67 IOPS, 13.42 MiB/s [2024-12-11T13:54:49.941Z] 3433.43 IOPS, 13.41 MiB/s [2024-12-11T13:54:50.876Z] 3424.12 IOPS, 13.38 MiB/s [2024-12-11T13:54:51.812Z] 3426.00 IOPS, 13.38 MiB/s [2024-12-11T13:54:51.812Z] 3430.40 IOPS, 13.40 MiB/s 00:13:58.765 Latency(us) 00:13:58.765 [2024-12-11T13:54:51.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:58.765 Verification LBA range: start 0x0 length 0x2000 00:13:58.765 TLSTESTn1 : 10.03 3432.71 13.41 0.00 0.00 37213.99 8519.68 23592.96 00:13:58.765 [2024-12-11T13:54:51.812Z] =================================================================================================================== 00:13:58.765 [2024-12-11T13:54:51.812Z] Total : 3432.71 13.41 0.00 0.00 37213.99 8519.68 23592.96 00:13:58.765 { 00:13:58.765 "results": [ 00:13:58.765 { 00:13:58.765 "job": "TLSTESTn1", 00:13:58.765 "core_mask": "0x4", 00:13:58.765 "workload": "verify", 00:13:58.765 "status": "finished", 00:13:58.765 "verify_range": { 00:13:58.765 "start": 0, 00:13:58.765 "length": 8192 00:13:58.765 }, 00:13:58.765 "queue_depth": 128, 00:13:58.765 "io_size": 4096, 00:13:58.765 "runtime": 10.030556, 00:13:58.765 "iops": 3432.711008243212, 00:13:58.765 "mibps": 13.409027375950046, 00:13:58.765 "io_failed": 0, 00:13:58.765 "io_timeout": 0, 00:13:58.765 "avg_latency_us": 37213.994133153086, 00:13:58.765 "min_latency_us": 8519.68, 00:13:58.765 "max_latency_us": 23592.96 00:13:58.765 } 00:13:58.765 ], 00:13:58.765 "core_count": 1 00:13:58.765 } 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72702 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72702 ']' 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72702 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72702 00:13:58.765 killing process with pid 72702 00:13:58.765 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.765 00:13:58.765 Latency(us) 00:13:58.765 [2024-12-11T13:54:51.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.765 [2024-12-11T13:54:51.812Z] =================================================================================================================== 00:13:58.765 [2024-12-11T13:54:51.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
72702' 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72702 00:13:58.765 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72702 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UHnatT9J3Z 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UHnatT9J3Z 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UHnatT9J3Z 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UHnatT9J3Z 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72829 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72829 /var/tmp/bdevperf.sock 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72829 ']' 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.024 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.024 [2024-12-11 13:54:52.032934] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:13:59.024 [2024-12-11 13:54:52.033037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72829 ] 00:13:59.284 [2024-12-11 13:54:52.182529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.284 [2024-12-11 13:54:52.243644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.284 [2024-12-11 13:54:52.298984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.542 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.542 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:59.542 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UHnatT9J3Z 00:13:59.800 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:00.059 [2024-12-11 13:54:52.897994] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.059 [2024-12-11 13:54:52.904714] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:00.059 [2024-12-11 13:54:52.904887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2360150 (107): Transport endpoint is not connected 00:14:00.059 [2024-12-11 13:54:52.905878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2360150 (9): Bad file descriptor 00:14:00.059 [2024-12-11 13:54:52.906875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:00.059 [2024-12-11 13:54:52.906920] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:00.059 [2024-12-11 13:54:52.906932] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:00.059 [2024-12-11 13:54:52.906949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:00.059 request: 00:14:00.059 { 00:14:00.059 "name": "TLSTEST", 00:14:00.059 "trtype": "tcp", 00:14:00.059 "traddr": "10.0.0.3", 00:14:00.059 "adrfam": "ipv4", 00:14:00.059 "trsvcid": "4420", 00:14:00.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.059 "prchk_reftag": false, 00:14:00.059 "prchk_guard": false, 00:14:00.059 "hdgst": false, 00:14:00.059 "ddgst": false, 00:14:00.059 "psk": "key0", 00:14:00.059 "allow_unrecognized_csi": false, 00:14:00.059 "method": "bdev_nvme_attach_controller", 00:14:00.059 "req_id": 1 00:14:00.059 } 00:14:00.059 Got JSON-RPC error response 00:14:00.059 response: 00:14:00.059 { 00:14:00.059 "code": -5, 00:14:00.059 "message": "Input/output error" 00:14:00.059 } 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72829 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72829 ']' 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72829 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72829 00:14:00.059 killing process with pid 72829 00:14:00.059 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.059 00:14:00.059 Latency(us) 00:14:00.059 [2024-12-11T13:54:53.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.059 [2024-12-11T13:54:53.106Z] =================================================================================================================== 00:14:00.059 [2024-12-11T13:54:53.106Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72829' 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72829 00:14:00.059 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72829 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu2lpdmxYW 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu2lpdmxYW 
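(Editor's note on the negative cases in this run: the failed bdev_nvme_attach_controller above, and the similar cases that follow, are executed under the NOT wrapper from autotest_common.sh, whose bookkeeping is visible in the xtrace: "local es=0" at @652, "es=1" after the wrapped command, the "(( es > 128 ))" and "[[ -n '' ]]" checks, and "(( !es == 0 ))" at @679. The sketch below keeps only that visible skeleton; the real helper's handling of signal deaths and of an expected-EXIT_STATUS override is deliberately reduced to a comment.)

NOT() {
    # Run the wrapped command and record its exit status instead of
    # aborting the test on failure.
    local es=0
    "$@" || es=$?
    # The full helper also inspects es > 128 (command killed by a signal)
    # and an optional expected-EXIT_STATUS override; both are omitted here.
    (( !es == 0 ))    # succeed only when the wrapped command failed
}

# Usage mirroring target/tls.sh@147 above: attaching with a key that was
# never registered for this host must fail for the test case to pass.
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UHnatT9J3Z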
00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu2lpdmxYW 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cu2lpdmxYW 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72850 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.316 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72850 /var/tmp/bdevperf.sock 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72850 ']' 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.317 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.317 [2024-12-11 13:54:53.218352] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:00.317 [2024-12-11 13:54:53.218460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72850 ] 00:14:00.317 [2024-12-11 13:54:53.360441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.575 [2024-12-11 13:54:53.418227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.575 [2024-12-11 13:54:53.474708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.510 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.510 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:01.510 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cu2lpdmxYW 00:14:01.510 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:01.799 [2024-12-11 13:54:54.772019] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.799 [2024-12-11 13:54:54.777280] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:01.799 [2024-12-11 13:54:54.777357] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:01.799 [2024-12-11 13:54:54.777469] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:01.799 [2024-12-11 13:54:54.777998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1349150 (107): Transport endpoint is not connected 00:14:01.799 [2024-12-11 13:54:54.778980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1349150 (9): Bad file descriptor 00:14:01.799 [2024-12-11 13:54:54.779976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:01.799 [2024-12-11 13:54:54.780022] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:01.799 [2024-12-11 13:54:54.780034] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:01.799 [2024-12-11 13:54:54.780050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:01.799 request: 00:14:01.799 { 00:14:01.799 "name": "TLSTEST", 00:14:01.799 "trtype": "tcp", 00:14:01.799 "traddr": "10.0.0.3", 00:14:01.799 "adrfam": "ipv4", 00:14:01.799 "trsvcid": "4420", 00:14:01.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.799 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:01.799 "prchk_reftag": false, 00:14:01.799 "prchk_guard": false, 00:14:01.799 "hdgst": false, 00:14:01.799 "ddgst": false, 00:14:01.799 "psk": "key0", 00:14:01.799 "allow_unrecognized_csi": false, 00:14:01.799 "method": "bdev_nvme_attach_controller", 00:14:01.799 "req_id": 1 00:14:01.799 } 00:14:01.799 Got JSON-RPC error response 00:14:01.799 response: 00:14:01.799 { 00:14:01.799 "code": -5, 00:14:01.799 "message": "Input/output error" 00:14:01.799 } 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72850 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72850 ']' 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72850 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72850 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:01.799 killing process with pid 72850 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72850' 00:14:01.799 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72850 00:14:01.800 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.800 00:14:01.800 Latency(us) 00:14:01.800 [2024-12-11T13:54:54.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.800 [2024-12-11T13:54:54.847Z] =================================================================================================================== 00:14:01.800 [2024-12-11T13:54:54.847Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:01.800 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72850 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu2lpdmxYW 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu2lpdmxYW 
00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu2lpdmxYW 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:02.063 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cu2lpdmxYW 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72884 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72884 /var/tmp/bdevperf.sock 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72884 ']' 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.064 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.064 [2024-12-11 13:54:55.086413] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:02.064 [2024-12-11 13:54:55.086558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72884 ] 00:14:02.321 [2024-12-11 13:54:55.230873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.321 [2024-12-11 13:54:55.286413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.321 [2024-12-11 13:54:55.344086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.578 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.578 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:02.578 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cu2lpdmxYW 00:14:02.840 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:03.100 [2024-12-11 13:54:55.940986] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.100 [2024-12-11 13:54:55.946276] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:03.100 [2024-12-11 13:54:55.946315] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:03.100 [2024-12-11 13:54:55.946364] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:03.100 [2024-12-11 13:54:55.947010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2305150 (107): Transport endpoint is not connected 00:14:03.100 [2024-12-11 13:54:55.947998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2305150 (9): Bad file descriptor 00:14:03.100 [2024-12-11 13:54:55.948995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:03.100 [2024-12-11 13:54:55.949015] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:03.100 [2024-12-11 13:54:55.949026] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:03.100 [2024-12-11 13:54:55.949041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:03.100 request: 00:14:03.100 { 00:14:03.100 "name": "TLSTEST", 00:14:03.100 "trtype": "tcp", 00:14:03.100 "traddr": "10.0.0.3", 00:14:03.100 "adrfam": "ipv4", 00:14:03.100 "trsvcid": "4420", 00:14:03.100 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:03.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.100 "prchk_reftag": false, 00:14:03.100 "prchk_guard": false, 00:14:03.100 "hdgst": false, 00:14:03.100 "ddgst": false, 00:14:03.100 "psk": "key0", 00:14:03.100 "allow_unrecognized_csi": false, 00:14:03.100 "method": "bdev_nvme_attach_controller", 00:14:03.100 "req_id": 1 00:14:03.100 } 00:14:03.100 Got JSON-RPC error response 00:14:03.100 response: 00:14:03.100 { 00:14:03.100 "code": -5, 00:14:03.100 "message": "Input/output error" 00:14:03.100 } 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72884 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72884 ']' 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72884 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72884 00:14:03.100 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:03.100 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:03.100 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72884' 00:14:03.100 killing process with pid 72884 00:14:03.100 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72884 00:14:03.100 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.100 00:14:03.100 Latency(us) 00:14:03.100 [2024-12-11T13:54:56.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.100 [2024-12-11T13:54:56.147Z] =================================================================================================================== 00:14:03.100 [2024-12-11T13:54:56.147Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:03.100 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72884 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:03.359 13:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72905 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72905 /var/tmp/bdevperf.sock 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72905 ']' 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.359 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.359 [2024-12-11 13:54:56.270008] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:03.359 [2024-12-11 13:54:56.270377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72905 ] 00:14:03.618 [2024-12-11 13:54:56.425119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.618 [2024-12-11 13:54:56.493767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.618 [2024-12-11 13:54:56.555494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:04.552 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.552 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:04.552 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:04.552 [2024-12-11 13:54:57.517387] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:04.552 [2024-12-11 13:54:57.517816] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:04.552 request: 00:14:04.552 { 00:14:04.552 "name": "key0", 00:14:04.552 "path": "", 00:14:04.552 "method": "keyring_file_add_key", 00:14:04.552 "req_id": 1 00:14:04.552 } 00:14:04.552 Got JSON-RPC error response 00:14:04.552 response: 00:14:04.552 { 00:14:04.552 "code": -1, 00:14:04.552 "message": "Operation not permitted" 00:14:04.552 } 00:14:04.552 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:04.811 [2024-12-11 13:54:57.825574] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.811 [2024-12-11 13:54:57.825680] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:04.811 request: 00:14:04.811 { 00:14:04.811 "name": "TLSTEST", 00:14:04.811 "trtype": "tcp", 00:14:04.811 "traddr": "10.0.0.3", 00:14:04.811 "adrfam": "ipv4", 00:14:04.811 "trsvcid": "4420", 00:14:04.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.811 "prchk_reftag": false, 00:14:04.811 "prchk_guard": false, 00:14:04.811 "hdgst": false, 00:14:04.811 "ddgst": false, 00:14:04.811 "psk": "key0", 00:14:04.811 "allow_unrecognized_csi": false, 00:14:04.811 "method": "bdev_nvme_attach_controller", 00:14:04.811 "req_id": 1 00:14:04.811 } 00:14:04.811 Got JSON-RPC error response 00:14:04.811 response: 00:14:04.811 { 00:14:04.811 "code": -126, 00:14:04.811 "message": "Required key not available" 00:14:04.811 } 00:14:04.811 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72905 00:14:04.811 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72905 ']' 00:14:04.811 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72905 00:14:04.811 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.811 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.811 13:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72905 00:14:05.070 killing process with pid 72905 00:14:05.070 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.070 00:14:05.070 Latency(us) 00:14:05.070 [2024-12-11T13:54:58.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.070 [2024-12-11T13:54:58.117Z] =================================================================================================================== 00:14:05.070 [2024-12-11T13:54:58.117Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.070 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:05.070 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:05.070 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72905' 00:14:05.070 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72905 00:14:05.070 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72905 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 72458 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72458 ']' 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72458 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72458 00:14:05.070 killing process with pid 72458 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72458' 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72458 00:14:05.070 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72458 00:14:05.328 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.oO4tMaUNe5 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.oO4tMaUNe5 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72948 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72948 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72948 ']' 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.329 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.587 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.587 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.587 [2024-12-11 13:54:58.437757] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:05.587 [2024-12-11 13:54:58.437861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.587 [2024-12-11 13:54:58.588362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.846 [2024-12-11 13:54:58.650453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.846 [2024-12-11 13:54:58.650516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:05.846 [2024-12-11 13:54:58.650527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.846 [2024-12-11 13:54:58.650536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.846 [2024-12-11 13:54:58.650543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.846 [2024-12-11 13:54:58.650957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.846 [2024-12-11 13:54:58.709407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oO4tMaUNe5 00:14:05.846 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:06.105 [2024-12-11 13:54:59.058498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.105 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:06.364 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:06.635 [2024-12-11 13:54:59.602593] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.635 [2024-12-11 13:54:59.602933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.635 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:06.908 malloc0 00:14:06.908 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:07.166 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:07.424 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oO4tMaUNe5 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
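Condensed, the target-side setup just traced (target/tls.sh@160 through @166) amounts to writing the PSK interchange string to a mode-0600 file and issuing a short RPC sequence against the running nvmf_tgt. The sketch below is assembled from the commands visible in the log; the relative paths and the KEY_PATH variable name are illustrative assumptions rather than a verbatim copy of the test script, and the key string is the one produced above by format_interchange_psk with digest 2.

    # PSK in NVMe TLS interchange format, as generated above (digest 2 -> ":02:")
    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"    # the keyring_file backend rejects group/world-accessible key files

    # Target side: TCP transport, subsystem, TLS-enabled listener, backing namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Register the key and require it for this host
    scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what requests the TLS-secured listener (the "TLS support is considered experimental" notice above, and secure_channel: true in the saved configuration later in the trace); without it the same listener would come up as plain TCP.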
00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oO4tMaUNe5 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73002 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73002 /var/tmp/bdevperf.sock 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73002 ']' 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.683 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.683 [2024-12-11 13:55:00.641407] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
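On the initiator side, the run_bdevperf helper that starts here hands the same key to a separate bdevperf application over its private RPC socket before attaching the controller with --psk. The calls below mirror what follows in the trace; KEY_PATH is the key file from the previous sketch, and backgrounding bdevperf with & is an illustrative simplification of the harness's process handling.

    # Start bdevperf in wait mode (-z) with its own RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Hand the same PSK to the initiator and attach over TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Drive the queued verify workload against the resulting TLSTESTn1 bdev
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests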
00:14:07.683 [2024-12-11 13:55:00.641488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:14:07.942 [2024-12-11 13:55:00.786244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.942 [2024-12-11 13:55:00.846027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.942 [2024-12-11 13:55:00.902129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.942 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.942 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.942 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:08.200 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.459 [2024-12-11 13:55:01.445935] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.717 TLSTESTn1 00:14:08.717 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:08.717 Running I/O for 10 seconds... 00:14:11.032 4093.00 IOPS, 15.99 MiB/s [2024-12-11T13:55:04.651Z] 4159.50 IOPS, 16.25 MiB/s [2024-12-11T13:55:06.037Z] 4169.33 IOPS, 16.29 MiB/s [2024-12-11T13:55:06.972Z] 4188.50 IOPS, 16.36 MiB/s [2024-12-11T13:55:07.914Z] 4133.60 IOPS, 16.15 MiB/s [2024-12-11T13:55:08.850Z] 4008.67 IOPS, 15.66 MiB/s [2024-12-11T13:55:09.826Z] 3974.86 IOPS, 15.53 MiB/s [2024-12-11T13:55:10.763Z] 3956.12 IOPS, 15.45 MiB/s [2024-12-11T13:55:11.698Z] 3941.78 IOPS, 15.40 MiB/s [2024-12-11T13:55:11.698Z] 3929.40 IOPS, 15.35 MiB/s 00:14:18.651 Latency(us) 00:14:18.651 [2024-12-11T13:55:11.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.651 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:18.651 Verification LBA range: start 0x0 length 0x2000 00:14:18.651 TLSTESTn1 : 10.02 3934.91 15.37 0.00 0.00 32468.40 6404.65 29193.31 00:14:18.651 [2024-12-11T13:55:11.698Z] =================================================================================================================== 00:14:18.651 [2024-12-11T13:55:11.698Z] Total : 3934.91 15.37 0.00 0.00 32468.40 6404.65 29193.31 00:14:18.651 { 00:14:18.651 "results": [ 00:14:18.651 { 00:14:18.651 "job": "TLSTESTn1", 00:14:18.651 "core_mask": "0x4", 00:14:18.651 "workload": "verify", 00:14:18.651 "status": "finished", 00:14:18.651 "verify_range": { 00:14:18.651 "start": 0, 00:14:18.651 "length": 8192 00:14:18.651 }, 00:14:18.651 "queue_depth": 128, 00:14:18.651 "io_size": 4096, 00:14:18.651 "runtime": 10.018276, 00:14:18.651 "iops": 3934.9085611137084, 00:14:18.651 "mibps": 15.370736566850423, 00:14:18.651 "io_failed": 0, 00:14:18.651 "io_timeout": 0, 00:14:18.651 "avg_latency_us": 32468.403553804965, 00:14:18.651 "min_latency_us": 6404.654545454546, 00:14:18.651 
"max_latency_us": 29193.30909090909 00:14:18.651 } 00:14:18.651 ], 00:14:18.651 "core_count": 1 00:14:18.651 } 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 73002 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73002 ']' 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73002 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.651 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73002 00:14:18.909 killing process with pid 73002 00:14:18.909 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.909 00:14:18.909 Latency(us) 00:14:18.909 [2024-12-11T13:55:11.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.909 [2024-12-11T13:55:11.956Z] =================================================================================================================== 00:14:18.909 [2024-12-11T13:55:11.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73002' 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73002 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73002 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.oO4tMaUNe5 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oO4tMaUNe5 00:14:18.909 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oO4tMaUNe5 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oO4tMaUNe5 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oO4tMaUNe5 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73128 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73128 /var/tmp/bdevperf.sock 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73128 ']' 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.910 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.168 [2024-12-11 13:55:11.988175] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:19.168 [2024-12-11 13:55:11.988583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:14:19.168 [2024-12-11 13:55:12.132310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.168 [2024-12-11 13:55:12.199266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.427 [2024-12-11 13:55:12.258770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.427 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.427 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:19.427 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:19.685 [2024-12-11 13:55:12.607557] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oO4tMaUNe5': 0100666 00:14:19.685 [2024-12-11 13:55:12.608032] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:19.685 request: 00:14:19.685 { 00:14:19.685 "name": "key0", 00:14:19.685 "path": "/tmp/tmp.oO4tMaUNe5", 00:14:19.685 "method": "keyring_file_add_key", 00:14:19.685 "req_id": 1 00:14:19.685 } 00:14:19.685 Got JSON-RPC error response 00:14:19.685 response: 00:14:19.685 { 00:14:19.685 "code": -1, 00:14:19.685 "message": "Operation not permitted" 00:14:19.685 } 00:14:19.685 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:19.943 [2024-12-11 13:55:12.935825] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.943 [2024-12-11 13:55:12.935925] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:19.943 request: 00:14:19.943 { 00:14:19.943 "name": "TLSTEST", 00:14:19.943 "trtype": "tcp", 00:14:19.944 "traddr": "10.0.0.3", 00:14:19.944 "adrfam": "ipv4", 00:14:19.944 "trsvcid": "4420", 00:14:19.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.944 "prchk_reftag": false, 00:14:19.944 "prchk_guard": false, 00:14:19.944 "hdgst": false, 00:14:19.944 "ddgst": false, 00:14:19.944 "psk": "key0", 00:14:19.944 "allow_unrecognized_csi": false, 00:14:19.944 "method": "bdev_nvme_attach_controller", 00:14:19.944 "req_id": 1 00:14:19.944 } 00:14:19.944 Got JSON-RPC error response 00:14:19.944 response: 00:14:19.944 { 00:14:19.944 "code": -126, 00:14:19.944 "message": "Required key not available" 00:14:19.944 } 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 73128 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73128 ']' 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73128 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.944 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73128 00:14:20.203 killing process with pid 73128 00:14:20.203 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.203 00:14:20.203 Latency(us) 00:14:20.203 [2024-12-11T13:55:13.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.203 [2024-12-11T13:55:13.250Z] =================================================================================================================== 00:14:20.203 [2024-12-11T13:55:13.250Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.203 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:20.203 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:20.203 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73128' 00:14:20.203 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73128 00:14:20.203 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73128 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72948 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72948 ']' 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72948 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72948 00:14:20.203 killing process with pid 72948 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72948' 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72948 00:14:20.203 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72948 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73159 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73159 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73159 ']' 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.770 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.770 [2024-12-11 13:55:13.596878] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:20.770 [2024-12-11 13:55:13.597831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.770 [2024-12-11 13:55:13.754031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.028 [2024-12-11 13:55:13.827633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.028 [2024-12-11 13:55:13.827753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.028 [2024-12-11 13:55:13.827781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.028 [2024-12-11 13:55:13.827793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.028 [2024-12-11 13:55:13.827802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
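The failed attach just traced is the expected outcome of the chmod 0666 a few lines earlier: keyring_file_check_path rejects a key file that is readable by group or others, keyring_file_add_key returns "Operation not permitted", and bdev_nvme_attach_controller then fails with -126 "Required key not available" because key0 was never registered. A minimal reproduction, assuming the same key file and bdevperf socket as in the sketches above, would look like:

    chmod 0666 "$KEY_PATH"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    # -> error -1 "Operation not permitted" (Invalid permissions for key file: 0100666)

    chmod 0600 "$KEY_PATH"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    # -> succeeds; the key can now be referenced as --psk key0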
00:14:21.028 [2024-12-11 13:55:13.828306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.028 [2024-12-11 13:55:13.891604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oO4tMaUNe5 00:14:21.596 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:21.854 [2024-12-11 13:55:14.889003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.176 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:22.434 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:22.691 [2024-12-11 13:55:15.481166] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.692 [2024-12-11 13:55:15.481474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.692 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:22.950 malloc0 00:14:22.950 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.207 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:23.466 
[2024-12-11 13:55:16.325414] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oO4tMaUNe5': 0100666 00:14:23.466 [2024-12-11 13:55:16.325470] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:23.466 request: 00:14:23.466 { 00:14:23.466 "name": "key0", 00:14:23.466 "path": "/tmp/tmp.oO4tMaUNe5", 00:14:23.466 "method": "keyring_file_add_key", 00:14:23.466 "req_id": 1 00:14:23.466 } 00:14:23.466 Got JSON-RPC error response 00:14:23.466 response: 00:14:23.466 { 00:14:23.466 "code": -1, 00:14:23.466 "message": "Operation not permitted" 00:14:23.466 } 00:14:23.466 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.725 [2024-12-11 13:55:16.601520] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:23.725 [2024-12-11 13:55:16.601623] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:23.725 request: 00:14:23.725 { 00:14:23.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.725 "host": "nqn.2016-06.io.spdk:host1", 00:14:23.725 "psk": "key0", 00:14:23.725 "method": "nvmf_subsystem_add_host", 00:14:23.725 "req_id": 1 00:14:23.725 } 00:14:23.725 Got JSON-RPC error response 00:14:23.725 response: 00:14:23.725 { 00:14:23.725 "code": -32603, 00:14:23.725 "message": "Internal error" 00:14:23.725 } 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 73159 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73159 ']' 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73159 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73159 00:14:23.725 killing process with pid 73159 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73159' 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73159 00:14:23.725 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73159 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.oO4tMaUNe5 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73234 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73234 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73234 ']' 00:14:23.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.984 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.984 [2024-12-11 13:55:17.007145] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:23.984 [2024-12-11 13:55:17.007249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.243 [2024-12-11 13:55:17.161962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.243 [2024-12-11 13:55:17.234625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.243 [2024-12-11 13:55:17.234744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.243 [2024-12-11 13:55:17.234773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.243 [2024-12-11 13:55:17.234783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.243 [2024-12-11 13:55:17.234792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
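The "Internal error" (-32603) from nvmf_subsystem_add_host in the preceding trace is a knock-on effect of the keyring failure: --psk names a keyring entry, so the key must be registered before the host is added. The test harness deliberately provokes this with its NOT wrapper; outside the harness it may be preferable to fail fast on the keyring step. A sketch, relying on rpc.py's non-zero exit status on JSON-RPC errors:

    if ! scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"; then
        echo "key registration failed (check that the key file mode is 0600)" >&2
        exit 1
    fi
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0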
00:14:24.243 [2024-12-11 13:55:17.235319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.501 [2024-12-11 13:55:17.296414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oO4tMaUNe5 00:14:25.068 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.326 [2024-12-11 13:55:18.332500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.326 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.585 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:25.843 [2024-12-11 13:55:18.868624] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.843 [2024-12-11 13:55:18.869271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.101 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:26.101 malloc0 00:14:26.360 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.618 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:26.877 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=73290 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
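With the key restored to 0600 the full target setup succeeds, and the trace next dumps both sides' configuration via save_config. That JSON (reproduced below in the log) captures the keyring entry, the TLS listener with secure_channel: true, and the host/PSK binding, so a known-good setup could be snapshotted and replayed instead of re-issuing the individual RPCs. A sketch of that idea; the output filename and the use of the --json startup flag are assumptions on top of what the log shows:

    # Snapshot the live target configuration
    scripts/rpc.py save_config > tls_target_config.json

    # Later: start a fresh target directly from the saved JSON (assumes the app's --json flag)
    build/bin/nvmf_tgt -m 0x2 --json tls_target_config.json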
00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 73290 /var/tmp/bdevperf.sock 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73290 ']' 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.135 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.135 [2024-12-11 13:55:20.097574] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:27.135 [2024-12-11 13:55:20.098123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73290 ] 00:14:27.395 [2024-12-11 13:55:20.252825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.395 [2024-12-11 13:55:20.322108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.395 [2024-12-11 13:55:20.382560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.328 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.328 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.328 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:28.328 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:28.587 [2024-12-11 13:55:21.611891] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.845 TLSTESTn1 00:14:28.845 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:29.104 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:29.104 "subsystems": [ 00:14:29.104 { 00:14:29.104 "subsystem": "keyring", 00:14:29.104 "config": [ 00:14:29.104 { 00:14:29.104 "method": "keyring_file_add_key", 00:14:29.104 "params": { 00:14:29.104 "name": "key0", 00:14:29.104 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:29.104 } 00:14:29.104 } 00:14:29.104 ] 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "subsystem": "iobuf", 00:14:29.104 "config": [ 00:14:29.104 { 00:14:29.104 "method": "iobuf_set_options", 00:14:29.104 "params": { 00:14:29.104 "small_pool_count": 8192, 00:14:29.104 "large_pool_count": 1024, 00:14:29.104 "small_bufsize": 8192, 00:14:29.104 "large_bufsize": 135168, 00:14:29.104 "enable_numa": false 00:14:29.104 } 00:14:29.104 } 00:14:29.104 ] 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 
"subsystem": "sock", 00:14:29.104 "config": [ 00:14:29.104 { 00:14:29.104 "method": "sock_set_default_impl", 00:14:29.104 "params": { 00:14:29.104 "impl_name": "uring" 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "method": "sock_impl_set_options", 00:14:29.104 "params": { 00:14:29.104 "impl_name": "ssl", 00:14:29.104 "recv_buf_size": 4096, 00:14:29.104 "send_buf_size": 4096, 00:14:29.104 "enable_recv_pipe": true, 00:14:29.104 "enable_quickack": false, 00:14:29.104 "enable_placement_id": 0, 00:14:29.104 "enable_zerocopy_send_server": true, 00:14:29.104 "enable_zerocopy_send_client": false, 00:14:29.104 "zerocopy_threshold": 0, 00:14:29.104 "tls_version": 0, 00:14:29.104 "enable_ktls": false 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "method": "sock_impl_set_options", 00:14:29.104 "params": { 00:14:29.104 "impl_name": "posix", 00:14:29.104 "recv_buf_size": 2097152, 00:14:29.104 "send_buf_size": 2097152, 00:14:29.104 "enable_recv_pipe": true, 00:14:29.104 "enable_quickack": false, 00:14:29.104 "enable_placement_id": 0, 00:14:29.104 "enable_zerocopy_send_server": true, 00:14:29.104 "enable_zerocopy_send_client": false, 00:14:29.104 "zerocopy_threshold": 0, 00:14:29.104 "tls_version": 0, 00:14:29.104 "enable_ktls": false 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "method": "sock_impl_set_options", 00:14:29.104 "params": { 00:14:29.104 "impl_name": "uring", 00:14:29.104 "recv_buf_size": 2097152, 00:14:29.104 "send_buf_size": 2097152, 00:14:29.104 "enable_recv_pipe": true, 00:14:29.104 "enable_quickack": false, 00:14:29.104 "enable_placement_id": 0, 00:14:29.104 "enable_zerocopy_send_server": false, 00:14:29.104 "enable_zerocopy_send_client": false, 00:14:29.104 "zerocopy_threshold": 0, 00:14:29.104 "tls_version": 0, 00:14:29.104 "enable_ktls": false 00:14:29.104 } 00:14:29.104 } 00:14:29.104 ] 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "subsystem": "vmd", 00:14:29.104 "config": [] 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "subsystem": "accel", 00:14:29.104 "config": [ 00:14:29.104 { 00:14:29.104 "method": "accel_set_options", 00:14:29.104 "params": { 00:14:29.104 "small_cache_size": 128, 00:14:29.104 "large_cache_size": 16, 00:14:29.104 "task_count": 2048, 00:14:29.104 "sequence_count": 2048, 00:14:29.104 "buf_count": 2048 00:14:29.104 } 00:14:29.104 } 00:14:29.104 ] 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "subsystem": "bdev", 00:14:29.104 "config": [ 00:14:29.104 { 00:14:29.104 "method": "bdev_set_options", 00:14:29.104 "params": { 00:14:29.104 "bdev_io_pool_size": 65535, 00:14:29.104 "bdev_io_cache_size": 256, 00:14:29.104 "bdev_auto_examine": true, 00:14:29.104 "iobuf_small_cache_size": 128, 00:14:29.104 "iobuf_large_cache_size": 16 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "method": "bdev_raid_set_options", 00:14:29.104 "params": { 00:14:29.104 "process_window_size_kb": 1024, 00:14:29.104 "process_max_bandwidth_mb_sec": 0 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.104 "method": "bdev_iscsi_set_options", 00:14:29.104 "params": { 00:14:29.104 "timeout_sec": 30 00:14:29.104 } 00:14:29.104 }, 00:14:29.104 { 00:14:29.105 "method": "bdev_nvme_set_options", 00:14:29.105 "params": { 00:14:29.105 "action_on_timeout": "none", 00:14:29.105 "timeout_us": 0, 00:14:29.105 "timeout_admin_us": 0, 00:14:29.105 "keep_alive_timeout_ms": 10000, 00:14:29.105 "arbitration_burst": 0, 00:14:29.105 "low_priority_weight": 0, 00:14:29.105 "medium_priority_weight": 0, 00:14:29.105 "high_priority_weight": 0, 00:14:29.105 
"nvme_adminq_poll_period_us": 10000, 00:14:29.105 "nvme_ioq_poll_period_us": 0, 00:14:29.105 "io_queue_requests": 0, 00:14:29.105 "delay_cmd_submit": true, 00:14:29.105 "transport_retry_count": 4, 00:14:29.105 "bdev_retry_count": 3, 00:14:29.105 "transport_ack_timeout": 0, 00:14:29.105 "ctrlr_loss_timeout_sec": 0, 00:14:29.105 "reconnect_delay_sec": 0, 00:14:29.105 "fast_io_fail_timeout_sec": 0, 00:14:29.105 "disable_auto_failback": false, 00:14:29.105 "generate_uuids": false, 00:14:29.105 "transport_tos": 0, 00:14:29.105 "nvme_error_stat": false, 00:14:29.105 "rdma_srq_size": 0, 00:14:29.105 "io_path_stat": false, 00:14:29.105 "allow_accel_sequence": false, 00:14:29.105 "rdma_max_cq_size": 0, 00:14:29.105 "rdma_cm_event_timeout_ms": 0, 00:14:29.105 "dhchap_digests": [ 00:14:29.105 "sha256", 00:14:29.105 "sha384", 00:14:29.105 "sha512" 00:14:29.105 ], 00:14:29.105 "dhchap_dhgroups": [ 00:14:29.105 "null", 00:14:29.105 "ffdhe2048", 00:14:29.105 "ffdhe3072", 00:14:29.105 "ffdhe4096", 00:14:29.105 "ffdhe6144", 00:14:29.105 "ffdhe8192" 00:14:29.105 ], 00:14:29.105 "rdma_umr_per_io": false 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "bdev_nvme_set_hotplug", 00:14:29.105 "params": { 00:14:29.105 "period_us": 100000, 00:14:29.105 "enable": false 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "bdev_malloc_create", 00:14:29.105 "params": { 00:14:29.105 "name": "malloc0", 00:14:29.105 "num_blocks": 8192, 00:14:29.105 "block_size": 4096, 00:14:29.105 "physical_block_size": 4096, 00:14:29.105 "uuid": "4cde7c4d-141a-4111-bd68-038d79d52bf5", 00:14:29.105 "optimal_io_boundary": 0, 00:14:29.105 "md_size": 0, 00:14:29.105 "dif_type": 0, 00:14:29.105 "dif_is_head_of_md": false, 00:14:29.105 "dif_pi_format": 0 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "bdev_wait_for_examine" 00:14:29.105 } 00:14:29.105 ] 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "subsystem": "nbd", 00:14:29.105 "config": [] 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "subsystem": "scheduler", 00:14:29.105 "config": [ 00:14:29.105 { 00:14:29.105 "method": "framework_set_scheduler", 00:14:29.105 "params": { 00:14:29.105 "name": "static" 00:14:29.105 } 00:14:29.105 } 00:14:29.105 ] 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "subsystem": "nvmf", 00:14:29.105 "config": [ 00:14:29.105 { 00:14:29.105 "method": "nvmf_set_config", 00:14:29.105 "params": { 00:14:29.105 "discovery_filter": "match_any", 00:14:29.105 "admin_cmd_passthru": { 00:14:29.105 "identify_ctrlr": false 00:14:29.105 }, 00:14:29.105 "dhchap_digests": [ 00:14:29.105 "sha256", 00:14:29.105 "sha384", 00:14:29.105 "sha512" 00:14:29.105 ], 00:14:29.105 "dhchap_dhgroups": [ 00:14:29.105 "null", 00:14:29.105 "ffdhe2048", 00:14:29.105 "ffdhe3072", 00:14:29.105 "ffdhe4096", 00:14:29.105 "ffdhe6144", 00:14:29.105 "ffdhe8192" 00:14:29.105 ] 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_set_max_subsystems", 00:14:29.105 "params": { 00:14:29.105 "max_subsystems": 1024 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_set_crdt", 00:14:29.105 "params": { 00:14:29.105 "crdt1": 0, 00:14:29.105 "crdt2": 0, 00:14:29.105 "crdt3": 0 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_create_transport", 00:14:29.105 "params": { 00:14:29.105 "trtype": "TCP", 00:14:29.105 "max_queue_depth": 128, 00:14:29.105 "max_io_qpairs_per_ctrlr": 127, 00:14:29.105 "in_capsule_data_size": 4096, 00:14:29.105 "max_io_size": 131072, 00:14:29.105 "io_unit_size": 131072, 
00:14:29.105 "max_aq_depth": 128, 00:14:29.105 "num_shared_buffers": 511, 00:14:29.105 "buf_cache_size": 4294967295, 00:14:29.105 "dif_insert_or_strip": false, 00:14:29.105 "zcopy": false, 00:14:29.105 "c2h_success": false, 00:14:29.105 "sock_priority": 0, 00:14:29.105 "abort_timeout_sec": 1, 00:14:29.105 "ack_timeout": 0, 00:14:29.105 "data_wr_pool_size": 0 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_create_subsystem", 00:14:29.105 "params": { 00:14:29.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.105 "allow_any_host": false, 00:14:29.105 "serial_number": "SPDK00000000000001", 00:14:29.105 "model_number": "SPDK bdev Controller", 00:14:29.105 "max_namespaces": 10, 00:14:29.105 "min_cntlid": 1, 00:14:29.105 "max_cntlid": 65519, 00:14:29.105 "ana_reporting": false 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_subsystem_add_host", 00:14:29.105 "params": { 00:14:29.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.105 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.105 "psk": "key0" 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_subsystem_add_ns", 00:14:29.105 "params": { 00:14:29.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.105 "namespace": { 00:14:29.105 "nsid": 1, 00:14:29.105 "bdev_name": "malloc0", 00:14:29.105 "nguid": "4CDE7C4D141A4111BD68038D79D52BF5", 00:14:29.105 "uuid": "4cde7c4d-141a-4111-bd68-038d79d52bf5", 00:14:29.105 "no_auto_visible": false 00:14:29.105 } 00:14:29.105 } 00:14:29.105 }, 00:14:29.105 { 00:14:29.105 "method": "nvmf_subsystem_add_listener", 00:14:29.105 "params": { 00:14:29.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.105 "listen_address": { 00:14:29.105 "trtype": "TCP", 00:14:29.105 "adrfam": "IPv4", 00:14:29.105 "traddr": "10.0.0.3", 00:14:29.105 "trsvcid": "4420" 00:14:29.105 }, 00:14:29.105 "secure_channel": true 00:14:29.105 } 00:14:29.105 } 00:14:29.105 ] 00:14:29.105 } 00:14:29.105 ] 00:14:29.105 }' 00:14:29.105 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:29.673 "subsystems": [ 00:14:29.673 { 00:14:29.673 "subsystem": "keyring", 00:14:29.673 "config": [ 00:14:29.673 { 00:14:29.673 "method": "keyring_file_add_key", 00:14:29.673 "params": { 00:14:29.673 "name": "key0", 00:14:29.673 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:29.673 } 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "iobuf", 00:14:29.673 "config": [ 00:14:29.673 { 00:14:29.673 "method": "iobuf_set_options", 00:14:29.673 "params": { 00:14:29.673 "small_pool_count": 8192, 00:14:29.673 "large_pool_count": 1024, 00:14:29.673 "small_bufsize": 8192, 00:14:29.673 "large_bufsize": 135168, 00:14:29.673 "enable_numa": false 00:14:29.673 } 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "sock", 00:14:29.673 "config": [ 00:14:29.673 { 00:14:29.673 "method": "sock_set_default_impl", 00:14:29.673 "params": { 00:14:29.673 "impl_name": "uring" 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "sock_impl_set_options", 00:14:29.673 "params": { 00:14:29.673 "impl_name": "ssl", 00:14:29.673 "recv_buf_size": 4096, 00:14:29.673 "send_buf_size": 4096, 00:14:29.673 "enable_recv_pipe": true, 00:14:29.673 "enable_quickack": false, 00:14:29.673 "enable_placement_id": 0, 00:14:29.673 "enable_zerocopy_send_server": true, 
00:14:29.673 "enable_zerocopy_send_client": false, 00:14:29.673 "zerocopy_threshold": 0, 00:14:29.673 "tls_version": 0, 00:14:29.673 "enable_ktls": false 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "sock_impl_set_options", 00:14:29.673 "params": { 00:14:29.673 "impl_name": "posix", 00:14:29.673 "recv_buf_size": 2097152, 00:14:29.673 "send_buf_size": 2097152, 00:14:29.673 "enable_recv_pipe": true, 00:14:29.673 "enable_quickack": false, 00:14:29.673 "enable_placement_id": 0, 00:14:29.673 "enable_zerocopy_send_server": true, 00:14:29.673 "enable_zerocopy_send_client": false, 00:14:29.673 "zerocopy_threshold": 0, 00:14:29.673 "tls_version": 0, 00:14:29.673 "enable_ktls": false 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "sock_impl_set_options", 00:14:29.673 "params": { 00:14:29.673 "impl_name": "uring", 00:14:29.673 "recv_buf_size": 2097152, 00:14:29.673 "send_buf_size": 2097152, 00:14:29.673 "enable_recv_pipe": true, 00:14:29.673 "enable_quickack": false, 00:14:29.673 "enable_placement_id": 0, 00:14:29.673 "enable_zerocopy_send_server": false, 00:14:29.673 "enable_zerocopy_send_client": false, 00:14:29.673 "zerocopy_threshold": 0, 00:14:29.673 "tls_version": 0, 00:14:29.673 "enable_ktls": false 00:14:29.673 } 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "vmd", 00:14:29.673 "config": [] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "accel", 00:14:29.673 "config": [ 00:14:29.673 { 00:14:29.673 "method": "accel_set_options", 00:14:29.673 "params": { 00:14:29.673 "small_cache_size": 128, 00:14:29.673 "large_cache_size": 16, 00:14:29.673 "task_count": 2048, 00:14:29.673 "sequence_count": 2048, 00:14:29.673 "buf_count": 2048 00:14:29.673 } 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "bdev", 00:14:29.673 "config": [ 00:14:29.673 { 00:14:29.673 "method": "bdev_set_options", 00:14:29.673 "params": { 00:14:29.673 "bdev_io_pool_size": 65535, 00:14:29.673 "bdev_io_cache_size": 256, 00:14:29.673 "bdev_auto_examine": true, 00:14:29.673 "iobuf_small_cache_size": 128, 00:14:29.673 "iobuf_large_cache_size": 16 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_raid_set_options", 00:14:29.673 "params": { 00:14:29.673 "process_window_size_kb": 1024, 00:14:29.673 "process_max_bandwidth_mb_sec": 0 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_iscsi_set_options", 00:14:29.673 "params": { 00:14:29.673 "timeout_sec": 30 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_nvme_set_options", 00:14:29.673 "params": { 00:14:29.673 "action_on_timeout": "none", 00:14:29.673 "timeout_us": 0, 00:14:29.673 "timeout_admin_us": 0, 00:14:29.673 "keep_alive_timeout_ms": 10000, 00:14:29.673 "arbitration_burst": 0, 00:14:29.673 "low_priority_weight": 0, 00:14:29.673 "medium_priority_weight": 0, 00:14:29.673 "high_priority_weight": 0, 00:14:29.673 "nvme_adminq_poll_period_us": 10000, 00:14:29.673 "nvme_ioq_poll_period_us": 0, 00:14:29.673 "io_queue_requests": 512, 00:14:29.673 "delay_cmd_submit": true, 00:14:29.673 "transport_retry_count": 4, 00:14:29.673 "bdev_retry_count": 3, 00:14:29.673 "transport_ack_timeout": 0, 00:14:29.673 "ctrlr_loss_timeout_sec": 0, 00:14:29.673 "reconnect_delay_sec": 0, 00:14:29.673 "fast_io_fail_timeout_sec": 0, 00:14:29.673 "disable_auto_failback": false, 00:14:29.673 "generate_uuids": false, 00:14:29.673 "transport_tos": 0, 00:14:29.673 "nvme_error_stat": false, 00:14:29.673 
"rdma_srq_size": 0, 00:14:29.673 "io_path_stat": false, 00:14:29.673 "allow_accel_sequence": false, 00:14:29.673 "rdma_max_cq_size": 0, 00:14:29.673 "rdma_cm_event_timeout_ms": 0, 00:14:29.673 "dhchap_digests": [ 00:14:29.673 "sha256", 00:14:29.673 "sha384", 00:14:29.673 "sha512" 00:14:29.673 ], 00:14:29.673 "dhchap_dhgroups": [ 00:14:29.673 "null", 00:14:29.673 "ffdhe2048", 00:14:29.673 "ffdhe3072", 00:14:29.673 "ffdhe4096", 00:14:29.673 "ffdhe6144", 00:14:29.673 "ffdhe8192" 00:14:29.673 ], 00:14:29.673 "rdma_umr_per_io": false 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_nvme_attach_controller", 00:14:29.673 "params": { 00:14:29.673 "name": "TLSTEST", 00:14:29.673 "trtype": "TCP", 00:14:29.673 "adrfam": "IPv4", 00:14:29.673 "traddr": "10.0.0.3", 00:14:29.673 "trsvcid": "4420", 00:14:29.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.673 "prchk_reftag": false, 00:14:29.673 "prchk_guard": false, 00:14:29.673 "ctrlr_loss_timeout_sec": 0, 00:14:29.673 "reconnect_delay_sec": 0, 00:14:29.673 "fast_io_fail_timeout_sec": 0, 00:14:29.673 "psk": "key0", 00:14:29.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:29.673 "hdgst": false, 00:14:29.673 "ddgst": false, 00:14:29.673 "multipath": "multipath" 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_nvme_set_hotplug", 00:14:29.673 "params": { 00:14:29.673 "period_us": 100000, 00:14:29.673 "enable": false 00:14:29.673 } 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "method": "bdev_wait_for_examine" 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }, 00:14:29.673 { 00:14:29.673 "subsystem": "nbd", 00:14:29.673 "config": [] 00:14:29.673 } 00:14:29.673 ] 00:14:29.673 }' 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 73290 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73290 ']' 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73290 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.673 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73290 00:14:29.673 killing process with pid 73290 00:14:29.673 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.673 00:14:29.673 Latency(us) 00:14:29.673 [2024-12-11T13:55:22.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.674 [2024-12-11T13:55:22.721Z] =================================================================================================================== 00:14:29.674 [2024-12-11T13:55:22.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73290' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73290 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73290 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 
-- # killprocess 73234 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73234 ']' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73234 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73234 00:14:29.674 killing process with pid 73234 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73234' 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73234 00:14:29.674 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73234 00:14:30.242 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:30.242 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:30.242 "subsystems": [ 00:14:30.242 { 00:14:30.242 "subsystem": "keyring", 00:14:30.242 "config": [ 00:14:30.242 { 00:14:30.242 "method": "keyring_file_add_key", 00:14:30.242 "params": { 00:14:30.242 "name": "key0", 00:14:30.242 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:30.242 } 00:14:30.242 } 00:14:30.242 ] 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "subsystem": "iobuf", 00:14:30.242 "config": [ 00:14:30.242 { 00:14:30.242 "method": "iobuf_set_options", 00:14:30.242 "params": { 00:14:30.242 "small_pool_count": 8192, 00:14:30.242 "large_pool_count": 1024, 00:14:30.242 "small_bufsize": 8192, 00:14:30.242 "large_bufsize": 135168, 00:14:30.242 "enable_numa": false 00:14:30.242 } 00:14:30.242 } 00:14:30.242 ] 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "subsystem": "sock", 00:14:30.242 "config": [ 00:14:30.242 { 00:14:30.242 "method": "sock_set_default_impl", 00:14:30.242 "params": { 00:14:30.242 "impl_name": "uring" 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "sock_impl_set_options", 00:14:30.242 "params": { 00:14:30.242 "impl_name": "ssl", 00:14:30.242 "recv_buf_size": 4096, 00:14:30.242 "send_buf_size": 4096, 00:14:30.242 "enable_recv_pipe": true, 00:14:30.242 "enable_quickack": false, 00:14:30.242 "enable_placement_id": 0, 00:14:30.242 "enable_zerocopy_send_server": true, 00:14:30.242 "enable_zerocopy_send_client": false, 00:14:30.242 "zerocopy_threshold": 0, 00:14:30.242 "tls_version": 0, 00:14:30.242 "enable_ktls": false 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "sock_impl_set_options", 00:14:30.242 "params": { 00:14:30.242 "impl_name": "posix", 00:14:30.242 "recv_buf_size": 2097152, 00:14:30.242 "send_buf_size": 2097152, 00:14:30.242 "enable_recv_pipe": true, 00:14:30.242 "enable_quickack": false, 00:14:30.242 "enable_placement_id": 0, 00:14:30.242 "enable_zerocopy_send_server": true, 00:14:30.242 "enable_zerocopy_send_client": false, 00:14:30.242 "zerocopy_threshold": 0, 00:14:30.242 "tls_version": 0, 00:14:30.242 "enable_ktls": false 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "sock_impl_set_options", 
00:14:30.242 "params": { 00:14:30.242 "impl_name": "uring", 00:14:30.242 "recv_buf_size": 2097152, 00:14:30.242 "send_buf_size": 2097152, 00:14:30.242 "enable_recv_pipe": true, 00:14:30.242 "enable_quickack": false, 00:14:30.242 "enable_placement_id": 0, 00:14:30.242 "enable_zerocopy_send_server": false, 00:14:30.242 "enable_zerocopy_send_client": false, 00:14:30.242 "zerocopy_threshold": 0, 00:14:30.242 "tls_version": 0, 00:14:30.242 "enable_ktls": false 00:14:30.242 } 00:14:30.242 } 00:14:30.242 ] 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "subsystem": "vmd", 00:14:30.242 "config": [] 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "subsystem": "accel", 00:14:30.242 "config": [ 00:14:30.242 { 00:14:30.242 "method": "accel_set_options", 00:14:30.242 "params": { 00:14:30.242 "small_cache_size": 128, 00:14:30.242 "large_cache_size": 16, 00:14:30.242 "task_count": 2048, 00:14:30.242 "sequence_count": 2048, 00:14:30.242 "buf_count": 2048 00:14:30.242 } 00:14:30.242 } 00:14:30.242 ] 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "subsystem": "bdev", 00:14:30.242 "config": [ 00:14:30.242 { 00:14:30.242 "method": "bdev_set_options", 00:14:30.242 "params": { 00:14:30.242 "bdev_io_pool_size": 65535, 00:14:30.242 "bdev_io_cache_size": 256, 00:14:30.242 "bdev_auto_examine": true, 00:14:30.242 "iobuf_small_cache_size": 128, 00:14:30.242 "iobuf_large_cache_size": 16 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "bdev_raid_set_options", 00:14:30.242 "params": { 00:14:30.242 "process_window_size_kb": 1024, 00:14:30.242 "process_max_bandwidth_mb_sec": 0 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "bdev_iscsi_set_options", 00:14:30.242 "params": { 00:14:30.242 "timeout_sec": 30 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "bdev_nvme_set_options", 00:14:30.242 "params": { 00:14:30.242 "action_on_timeout": "none", 00:14:30.242 "timeout_us": 0, 00:14:30.242 "timeout_admin_us": 0, 00:14:30.242 "keep_alive_timeout_ms": 10000, 00:14:30.242 "arbitration_burst": 0, 00:14:30.242 "low_priority_weight": 0, 00:14:30.242 "medium_priority_weight": 0, 00:14:30.242 "high_priority_weight": 0, 00:14:30.242 "nvme_adminq_poll_period_us": 10000, 00:14:30.242 "nvme_ioq_poll_period_us": 0, 00:14:30.242 "io_queue_requests": 0, 00:14:30.242 "delay_cmd_submit": true, 00:14:30.242 "transport_retry_count": 4, 00:14:30.242 "bdev_retry_count": 3, 00:14:30.242 "transport_ack_timeout": 0, 00:14:30.242 "ctrlr_loss_timeout_sec": 0, 00:14:30.242 "reconnect_delay_sec": 0, 00:14:30.242 "fast_io_fail_timeout_sec": 0, 00:14:30.242 "disable_auto_failback": false, 00:14:30.242 "generate_uuids": false, 00:14:30.242 "transport_tos": 0, 00:14:30.242 "nvme_error_stat": false, 00:14:30.242 "rdma_srq_size": 0, 00:14:30.242 "io_path_stat": false, 00:14:30.242 "allow_accel_sequence": false, 00:14:30.242 "rdma_max_cq_size": 0, 00:14:30.242 "rdma_cm_event_timeout_ms": 0, 00:14:30.242 "dhchap_digests": [ 00:14:30.242 "sha256", 00:14:30.242 "sha384", 00:14:30.242 "sha512" 00:14:30.242 ], 00:14:30.242 "dhchap_dhgroups": [ 00:14:30.242 "null", 00:14:30.242 "ffdhe2048", 00:14:30.242 "ffdhe3072", 00:14:30.242 "ffdhe4096", 00:14:30.242 "ffdhe6144", 00:14:30.242 "ffdhe8192" 00:14:30.242 ], 00:14:30.242 "rdma_umr_per_io": false 00:14:30.242 } 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "method": "bdev_nvme_set_hotplug", 00:14:30.242 "params": { 00:14:30.242 "period_us": 100000, 00:14:30.242 "enable": false 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "bdev_malloc_create", 
00:14:30.243 "params": { 00:14:30.243 "name": "malloc0", 00:14:30.243 "num_blocks": 8192, 00:14:30.243 "block_size": 4096, 00:14:30.243 "physical_block_size": 4096, 00:14:30.243 "uuid": "4cde7c4d-141a-4111-bd68-038d79d52bf5", 00:14:30.243 "optimal_io_boundary": 0, 00:14:30.243 "md_size": 0, 00:14:30.243 "dif_type": 0, 00:14:30.243 "dif_is_head_of_md": false, 00:14:30.243 "dif_pi_format": 0 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "bdev_wait_for_examine" 00:14:30.243 } 00:14:30.243 ] 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "subsystem": "nbd", 00:14:30.243 "config": [] 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "subsystem": "scheduler", 00:14:30.243 "config": [ 00:14:30.243 { 00:14:30.243 "method": "framework_set_scheduler", 00:14:30.243 "params": { 00:14:30.243 "name": "static" 00:14:30.243 } 00:14:30.243 } 00:14:30.243 ] 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "subsystem": "nvmf", 00:14:30.243 "config": [ 00:14:30.243 { 00:14:30.243 "method": "nvmf_set_config", 00:14:30.243 "params": { 00:14:30.243 "discovery_filter": "match_any", 00:14:30.243 "admin_cmd_passthru": { 00:14:30.243 "identify_ctrlr": false 00:14:30.243 }, 00:14:30.243 "dhchap_digests": [ 00:14:30.243 "sha256", 00:14:30.243 "sha384", 00:14:30.243 "sha512" 00:14:30.243 ], 00:14:30.243 "dhchap_dhgroups": [ 00:14:30.243 "null", 00:14:30.243 "ffdhe2048", 00:14:30.243 "ffdhe3072", 00:14:30.243 "ffdhe4096", 00:14:30.243 "ffdhe6144", 00:14:30.243 "ffdhe8192" 00:14:30.243 ] 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_set_max_subsystems", 00:14:30.243 "params": { 00:14:30.243 "max_subsystems": 1024 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_set_crdt", 00:14:30.243 "params": { 00:14:30.243 "crdt1": 0, 00:14:30.243 "crdt2": 0, 00:14:30.243 "crdt3": 0 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_create_transport", 00:14:30.243 "params": { 00:14:30.243 "trtype": "TCP", 00:14:30.243 "max_queue_depth": 128, 00:14:30.243 "max_io_qpairs_per_ctrlr": 127, 00:14:30.243 "in_capsule_data_size": 4096, 00:14:30.243 "max_io_size": 131072, 00:14:30.243 "io_unit_size": 131072, 00:14:30.243 "max_aq_depth": 128, 00:14:30.243 "num_shared_buffers": 511, 00:14:30.243 "buf_cache_size": 4294967295, 00:14:30.243 "dif_insert_or_strip": false, 00:14:30.243 "zcopy": false, 00:14:30.243 "c2h_success": false, 00:14:30.243 "sock_priority": 0, 00:14:30.243 "abort_timeout_sec": 1, 00:14:30.243 "ack_timeout": 0, 00:14:30.243 "data_wr_pool_size": 0 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_create_subsystem", 00:14:30.243 "params": { 00:14:30.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.243 "allow_any_host": false, 00:14:30.243 "serial_number": "SPDK00000000000001", 00:14:30.243 "model_number": "SPDK bdev Controller", 00:14:30.243 "max_namespaces": 10, 00:14:30.243 "min_cntlid": 1, 00:14:30.243 "max_cntlid": 65519, 00:14:30.243 "ana_reporting": false 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_subsystem_add_host", 00:14:30.243 "params": { 00:14:30.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.243 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.243 "psk": "key0" 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_subsystem_add_ns", 00:14:30.243 "params": { 00:14:30.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.243 "namespace": { 00:14:30.243 "nsid": 1, 00:14:30.243 "bdev_name": "malloc0", 00:14:30.243 "nguid": "4CDE7C4D141A4111BD68038D79D52BF5", 
00:14:30.243 "uuid": "4cde7c4d-141a-4111-bd68-038d79d52bf5", 00:14:30.243 "no_auto_visible": false 00:14:30.243 } 00:14:30.243 } 00:14:30.243 }, 00:14:30.243 { 00:14:30.243 "method": "nvmf_subsystem_add_listener", 00:14:30.243 "params": { 00:14:30.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.243 "listen_address": { 00:14:30.243 "trtype": "TCP", 00:14:30.243 "adrfam": "IPv4", 00:14:30.243 "traddr": "10.0.0.3", 00:14:30.243 "trsvcid": "4420" 00:14:30.243 }, 00:14:30.243 "secure_channel": true 00:14:30.243 } 00:14:30.243 } 00:14:30.243 ] 00:14:30.243 } 00:14:30.243 ] 00:14:30.243 }' 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73345 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73345 00:14:30.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73345 ']' 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.243 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 [2024-12-11 13:55:23.104105] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:30.243 [2024-12-11 13:55:23.104567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.243 [2024-12-11 13:55:23.257898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.502 [2024-12-11 13:55:23.336190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.502 [2024-12-11 13:55:23.337371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.502 [2024-12-11 13:55:23.337409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.502 [2024-12-11 13:55:23.337419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.502 [2024-12-11 13:55:23.337426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.502 [2024-12-11 13:55:23.338075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.502 [2024-12-11 13:55:23.508553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.761 [2024-12-11 13:55:23.595240] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.761 [2024-12-11 13:55:23.627215] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.761 [2024-12-11 13:55:23.627468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=73378 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 73378 /var/tmp/bdevperf.sock 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73378 ']' 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.327 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:31.327 "subsystems": [ 00:14:31.327 { 00:14:31.327 "subsystem": "keyring", 00:14:31.327 "config": [ 00:14:31.327 { 00:14:31.327 "method": "keyring_file_add_key", 00:14:31.327 "params": { 00:14:31.328 "name": "key0", 00:14:31.328 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:31.328 } 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "iobuf", 00:14:31.328 "config": [ 00:14:31.328 { 00:14:31.328 "method": "iobuf_set_options", 00:14:31.328 "params": { 00:14:31.328 "small_pool_count": 8192, 00:14:31.328 "large_pool_count": 1024, 00:14:31.328 "small_bufsize": 8192, 00:14:31.328 "large_bufsize": 135168, 00:14:31.328 "enable_numa": false 00:14:31.328 } 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "sock", 00:14:31.328 "config": [ 00:14:31.328 { 00:14:31.328 "method": "sock_set_default_impl", 00:14:31.328 "params": { 00:14:31.328 "impl_name": "uring" 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "sock_impl_set_options", 00:14:31.328 "params": { 00:14:31.328 "impl_name": "ssl", 00:14:31.328 "recv_buf_size": 4096, 00:14:31.328 "send_buf_size": 4096, 00:14:31.328 "enable_recv_pipe": true, 00:14:31.328 "enable_quickack": false, 00:14:31.328 "enable_placement_id": 0, 00:14:31.328 "enable_zerocopy_send_server": true, 00:14:31.328 "enable_zerocopy_send_client": false, 00:14:31.328 "zerocopy_threshold": 0, 00:14:31.328 "tls_version": 0, 00:14:31.328 "enable_ktls": false 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "sock_impl_set_options", 00:14:31.328 "params": { 00:14:31.328 "impl_name": "posix", 00:14:31.328 "recv_buf_size": 2097152, 00:14:31.328 "send_buf_size": 2097152, 00:14:31.328 "enable_recv_pipe": true, 00:14:31.328 "enable_quickack": false, 00:14:31.328 "enable_placement_id": 0, 00:14:31.328 "enable_zerocopy_send_server": true, 00:14:31.328 "enable_zerocopy_send_client": false, 00:14:31.328 "zerocopy_threshold": 0, 00:14:31.328 "tls_version": 0, 00:14:31.328 "enable_ktls": false 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "sock_impl_set_options", 00:14:31.328 "params": { 00:14:31.328 "impl_name": "uring", 00:14:31.328 "recv_buf_size": 2097152, 00:14:31.328 "send_buf_size": 2097152, 00:14:31.328 "enable_recv_pipe": true, 00:14:31.328 "enable_quickack": false, 00:14:31.328 "enable_placement_id": 0, 00:14:31.328 "enable_zerocopy_send_server": false, 00:14:31.328 "enable_zerocopy_send_client": false, 00:14:31.328 "zerocopy_threshold": 0, 00:14:31.328 "tls_version": 0, 00:14:31.328 "enable_ktls": false 00:14:31.328 } 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "vmd", 00:14:31.328 "config": [] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "accel", 00:14:31.328 "config": [ 00:14:31.328 { 00:14:31.328 "method": "accel_set_options", 00:14:31.328 "params": { 00:14:31.328 "small_cache_size": 128, 00:14:31.328 "large_cache_size": 16, 00:14:31.328 "task_count": 2048, 00:14:31.328 "sequence_count": 2048, 00:14:31.328 "buf_count": 2048 00:14:31.328 } 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "bdev", 00:14:31.328 "config": [ 00:14:31.328 { 00:14:31.328 "method": "bdev_set_options", 00:14:31.328 "params": { 00:14:31.328 "bdev_io_pool_size": 65535, 00:14:31.328 
"bdev_io_cache_size": 256, 00:14:31.328 "bdev_auto_examine": true, 00:14:31.328 "iobuf_small_cache_size": 128, 00:14:31.328 "iobuf_large_cache_size": 16 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_raid_set_options", 00:14:31.328 "params": { 00:14:31.328 "process_window_size_kb": 1024, 00:14:31.328 "process_max_bandwidth_mb_sec": 0 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_iscsi_set_options", 00:14:31.328 "params": { 00:14:31.328 "timeout_sec": 30 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_nvme_set_options", 00:14:31.328 "params": { 00:14:31.328 "action_on_timeout": "none", 00:14:31.328 "timeout_us": 0, 00:14:31.328 "timeout_admin_us": 0, 00:14:31.328 "keep_alive_timeout_ms": 10000, 00:14:31.328 "arbitration_burst": 0, 00:14:31.328 "low_priority_weight": 0, 00:14:31.328 "medium_priority_weight": 0, 00:14:31.328 "high_priority_weight": 0, 00:14:31.328 "nvme_adminq_poll_period_us": 10000, 00:14:31.328 "nvme_ioq_poll_period_us": 0, 00:14:31.328 "io_queue_requests": 512, 00:14:31.328 "delay_cmd_submit": true, 00:14:31.328 "transport_retry_count": 4, 00:14:31.328 "bdev_retry_count": 3, 00:14:31.328 "transport_ack_timeout": 0, 00:14:31.328 "ctrlr_loss_timeout_sec": 0, 00:14:31.328 "reconnect_delay_sec": 0, 00:14:31.328 "fast_io_fail_timeout_sec": 0, 00:14:31.328 "disable_auto_failback": false, 00:14:31.328 "generate_uuids": false, 00:14:31.328 "transport_tos": 0, 00:14:31.328 "nvme_error_stat": false, 00:14:31.328 "rdma_srq_size": 0, 00:14:31.328 "io_path_stat": false, 00:14:31.328 "allow_accel_sequence": false, 00:14:31.328 "rdma_max_cq_size": 0, 00:14:31.328 "rdma_cm_event_timeout_ms": 0, 00:14:31.328 "dhchap_digests": [ 00:14:31.328 "sha256", 00:14:31.328 "sha384", 00:14:31.328 "sha512" 00:14:31.328 ], 00:14:31.328 "dhchap_dhgroups": [ 00:14:31.328 "null", 00:14:31.328 "ffdhe2048", 00:14:31.328 "ffdhe3072", 00:14:31.328 "ffdhe4096", 00:14:31.328 "ffdhe6144", 00:14:31.328 "ffdhe8192" 00:14:31.328 ], 00:14:31.328 "rdma_umr_per_io": false 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_nvme_attach_controller", 00:14:31.328 "params": { 00:14:31.328 "name": "TLSTEST", 00:14:31.328 "trtype": "TCP", 00:14:31.328 "adrfam": "IPv4", 00:14:31.328 "traddr": "10.0.0.3", 00:14:31.328 "trsvcid": "4420", 00:14:31.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.328 "prchk_reftag": false, 00:14:31.328 "prchk_guard": false, 00:14:31.328 "ctrlr_loss_timeout_sec": 0, 00:14:31.328 "reconnect_delay_sec": 0, 00:14:31.328 "fast_io_fail_timeout_sec": 0, 00:14:31.328 "psk": "key0", 00:14:31.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.328 "hdgst": false, 00:14:31.328 "ddgst": false, 00:14:31.328 "multipath": "multipath" 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_nvme_set_hotplug", 00:14:31.328 "params": { 00:14:31.328 "period_us": 100000, 00:14:31.328 "enable": false 00:14:31.328 } 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "method": "bdev_wait_for_examine" 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }, 00:14:31.328 { 00:14:31.328 "subsystem": "nbd", 00:14:31.328 "config": [] 00:14:31.328 } 00:14:31.328 ] 00:14:31.328 }' 00:14:31.328 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:31.328 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.328 
[2024-12-11 13:55:24.241582] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:31.328 [2024-12-11 13:55:24.242604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73378 ] 00:14:31.587 [2024-12-11 13:55:24.399077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.587 [2024-12-11 13:55:24.465500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.587 [2024-12-11 13:55:24.610959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.845 [2024-12-11 13:55:24.666862] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.414 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.414 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:32.414 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:32.414 Running I/O for 10 seconds... 00:14:34.361 3951.00 IOPS, 15.43 MiB/s [2024-12-11T13:55:28.783Z] 3968.00 IOPS, 15.50 MiB/s [2024-12-11T13:55:29.719Z] 3952.67 IOPS, 15.44 MiB/s [2024-12-11T13:55:30.653Z] 3943.25 IOPS, 15.40 MiB/s [2024-12-11T13:55:31.590Z] 3938.60 IOPS, 15.39 MiB/s [2024-12-11T13:55:32.568Z] 3933.17 IOPS, 15.36 MiB/s [2024-12-11T13:55:33.503Z] 3931.00 IOPS, 15.36 MiB/s [2024-12-11T13:55:34.438Z] 3937.12 IOPS, 15.38 MiB/s [2024-12-11T13:55:35.813Z] 3934.67 IOPS, 15.37 MiB/s [2024-12-11T13:55:35.813Z] 3931.50 IOPS, 15.36 MiB/s 00:14:42.766 Latency(us) 00:14:42.766 [2024-12-11T13:55:35.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:42.766 Verification LBA range: start 0x0 length 0x2000 00:14:42.766 TLSTESTn1 : 10.02 3937.31 15.38 0.00 0.00 32448.12 6285.50 23950.43 00:14:42.766 [2024-12-11T13:55:35.813Z] =================================================================================================================== 00:14:42.766 [2024-12-11T13:55:35.813Z] Total : 3937.31 15.38 0.00 0.00 32448.12 6285.50 23950.43 00:14:42.766 { 00:14:42.766 "results": [ 00:14:42.766 { 00:14:42.766 "job": "TLSTESTn1", 00:14:42.766 "core_mask": "0x4", 00:14:42.766 "workload": "verify", 00:14:42.766 "status": "finished", 00:14:42.766 "verify_range": { 00:14:42.766 "start": 0, 00:14:42.766 "length": 8192 00:14:42.766 }, 00:14:42.766 "queue_depth": 128, 00:14:42.766 "io_size": 4096, 00:14:42.766 "runtime": 10.017761, 00:14:42.766 "iops": 3937.306949127654, 00:14:42.766 "mibps": 15.3801052700299, 00:14:42.766 "io_failed": 0, 00:14:42.766 "io_timeout": 0, 00:14:42.766 "avg_latency_us": 32448.121293373864, 00:14:42.766 "min_latency_us": 6285.498181818181, 00:14:42.766 "max_latency_us": 23950.429090909092 00:14:42.766 } 00:14:42.766 ], 00:14:42.766 "core_count": 1 00:14:42.766 } 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 73378 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 73378 ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73378 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73378 00:14:42.766 killing process with pid 73378 00:14:42.766 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.766 00:14:42.766 Latency(us) 00:14:42.766 [2024-12-11T13:55:35.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.766 [2024-12-11T13:55:35.813Z] =================================================================================================================== 00:14:42.766 [2024-12-11T13:55:35.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73378' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73378 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73378 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 73345 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73345 ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73345 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73345 00:14:42.766 killing process with pid 73345 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73345' 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73345 00:14:42.766 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73345 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73511 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73511 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73511 ']' 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.025 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.025 [2024-12-11 13:55:36.012700] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:43.025 [2024-12-11 13:55:36.014093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.283 [2024-12-11 13:55:36.171604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.283 [2024-12-11 13:55:36.242414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.283 [2024-12-11 13:55:36.242480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.283 [2024-12-11 13:55:36.242495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.283 [2024-12-11 13:55:36.242506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.283 [2024-12-11 13:55:36.242515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.283 [2024-12-11 13:55:36.243074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.283 [2024-12-11 13:55:36.306051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.oO4tMaUNe5 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oO4tMaUNe5 00:14:44.219 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.477 [2024-12-11 13:55:37.392952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.477 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:44.735 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:44.993 [2024-12-11 13:55:37.981130] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.993 [2024-12-11 13:55:37.981445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.993 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:45.560 malloc0 00:14:45.560 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.818 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:46.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=73572 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 73572 /var/tmp/bdevperf.sock 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73572 ']' 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.334 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.334 [2024-12-11 13:55:39.222817] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:46.334 [2024-12-11 13:55:39.223190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:14:46.334 [2024-12-11 13:55:39.368001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.592 [2024-12-11 13:55:39.434446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.592 [2024-12-11 13:55:39.492737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.592 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.592 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.592 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:46.849 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:47.107 [2024-12-11 13:55:40.103919] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.364 nvme0n1 00:14:47.364 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.364 Running I/O for 1 seconds... 
00:14:48.297 4105.00 IOPS, 16.04 MiB/s 00:14:48.297 Latency(us) 00:14:48.297 [2024-12-11T13:55:41.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.297 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:48.297 Verification LBA range: start 0x0 length 0x2000 00:14:48.297 nvme0n1 : 1.02 4164.68 16.27 0.00 0.00 30459.32 6076.97 25380.31 00:14:48.297 [2024-12-11T13:55:41.344Z] =================================================================================================================== 00:14:48.297 [2024-12-11T13:55:41.344Z] Total : 4164.68 16.27 0.00 0.00 30459.32 6076.97 25380.31 00:14:48.297 { 00:14:48.297 "results": [ 00:14:48.297 { 00:14:48.297 "job": "nvme0n1", 00:14:48.297 "core_mask": "0x2", 00:14:48.297 "workload": "verify", 00:14:48.297 "status": "finished", 00:14:48.297 "verify_range": { 00:14:48.297 "start": 0, 00:14:48.297 "length": 8192 00:14:48.297 }, 00:14:48.297 "queue_depth": 128, 00:14:48.297 "io_size": 4096, 00:14:48.297 "runtime": 1.016405, 00:14:48.297 "iops": 4164.6784500273025, 00:14:48.297 "mibps": 16.26827519541915, 00:14:48.297 "io_failed": 0, 00:14:48.297 "io_timeout": 0, 00:14:48.297 "avg_latency_us": 30459.32082383008, 00:14:48.297 "min_latency_us": 6076.9745454545455, 00:14:48.297 "max_latency_us": 25380.305454545454 00:14:48.297 } 00:14:48.297 ], 00:14:48.297 "core_count": 1 00:14:48.297 } 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 73572 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73572 ']' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73572 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73572 00:14:48.555 killing process with pid 73572 00:14:48.555 Received shutdown signal, test time was about 1.000000 seconds 00:14:48.555 00:14:48.555 Latency(us) 00:14:48.555 [2024-12-11T13:55:41.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.555 [2024-12-11T13:55:41.602Z] =================================================================================================================== 00:14:48.555 [2024-12-11T13:55:41.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73572' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73572 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73572 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 73511 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73511 ']' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73511 00:14:48.555 13:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.555 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73511 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73511' 00:14:48.813 killing process with pid 73511 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73511 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73511 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73616 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73616 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73616 ']' 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.813 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.088 [2024-12-11 13:55:41.901521] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:49.088 [2024-12-11 13:55:41.901969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.088 [2024-12-11 13:55:42.043841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.088 [2024-12-11 13:55:42.105232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.088 [2024-12-11 13:55:42.105578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:49.088 [2024-12-11 13:55:42.105598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.088 [2024-12-11 13:55:42.105607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.088 [2024-12-11 13:55:42.105614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.088 [2024-12-11 13:55:42.106055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.368 [2024-12-11 13:55:42.163736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 [2024-12-11 13:55:42.284990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.368 malloc0 00:14:49.368 [2024-12-11 13:55:42.316768] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:49.368 [2024-12-11 13:55:42.317173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=73640 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:49.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 73640 /var/tmp/bdevperf.sock 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73640 ']' 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.368 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 [2024-12-11 13:55:42.406028] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:49.368 [2024-12-11 13:55:42.406174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73640 ] 00:14:49.626 [2024-12-11 13:55:42.551694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.626 [2024-12-11 13:55:42.615276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.883 [2024-12-11 13:55:42.674554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.816 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.816 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:50.816 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oO4tMaUNe5 00:14:50.816 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:51.074 [2024-12-11 13:55:44.106955] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:51.332 nvme0n1 00:14:51.332 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.332 Running I/O for 1 seconds... 
00:14:52.713 3865.00 IOPS, 15.10 MiB/s 00:14:52.713 Latency(us) 00:14:52.713 [2024-12-11T13:55:45.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.713 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.713 Verification LBA range: start 0x0 length 0x2000 00:14:52.713 nvme0n1 : 1.02 3913.85 15.29 0.00 0.00 32301.94 2785.28 20494.89 00:14:52.713 [2024-12-11T13:55:45.760Z] =================================================================================================================== 00:14:52.713 [2024-12-11T13:55:45.760Z] Total : 3913.85 15.29 0.00 0.00 32301.94 2785.28 20494.89 00:14:52.713 { 00:14:52.713 "results": [ 00:14:52.713 { 00:14:52.713 "job": "nvme0n1", 00:14:52.713 "core_mask": "0x2", 00:14:52.713 "workload": "verify", 00:14:52.713 "status": "finished", 00:14:52.713 "verify_range": { 00:14:52.713 "start": 0, 00:14:52.713 "length": 8192 00:14:52.713 }, 00:14:52.713 "queue_depth": 128, 00:14:52.713 "io_size": 4096, 00:14:52.713 "runtime": 1.020224, 00:14:52.713 "iops": 3913.846370993037, 00:14:52.713 "mibps": 15.28846238669155, 00:14:52.713 "io_failed": 0, 00:14:52.713 "io_timeout": 0, 00:14:52.713 "avg_latency_us": 32301.943965576123, 00:14:52.713 "min_latency_us": 2785.28, 00:14:52.713 "max_latency_us": 20494.894545454546 00:14:52.713 } 00:14:52.713 ], 00:14:52.713 "core_count": 1 00:14:52.713 } 00:14:52.713 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:52.713 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:52.713 "subsystems": [ 00:14:52.713 { 00:14:52.713 "subsystem": "keyring", 00:14:52.713 "config": [ 00:14:52.713 { 00:14:52.713 "method": "keyring_file_add_key", 00:14:52.713 "params": { 00:14:52.713 "name": "key0", 00:14:52.713 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:52.713 } 00:14:52.713 } 00:14:52.713 ] 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "subsystem": "iobuf", 00:14:52.713 "config": [ 00:14:52.713 { 00:14:52.713 "method": "iobuf_set_options", 00:14:52.713 "params": { 00:14:52.713 "small_pool_count": 8192, 00:14:52.713 "large_pool_count": 1024, 00:14:52.713 "small_bufsize": 8192, 00:14:52.713 "large_bufsize": 135168, 00:14:52.713 "enable_numa": false 00:14:52.713 } 00:14:52.713 } 00:14:52.713 ] 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "subsystem": "sock", 00:14:52.713 "config": [ 00:14:52.713 { 00:14:52.713 "method": "sock_set_default_impl", 00:14:52.713 "params": { 00:14:52.713 "impl_name": "uring" 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "sock_impl_set_options", 00:14:52.713 "params": { 00:14:52.713 "impl_name": "ssl", 00:14:52.713 "recv_buf_size": 4096, 00:14:52.713 "send_buf_size": 4096, 00:14:52.713 "enable_recv_pipe": true, 00:14:52.713 "enable_quickack": false, 00:14:52.713 "enable_placement_id": 0, 00:14:52.713 "enable_zerocopy_send_server": true, 00:14:52.713 "enable_zerocopy_send_client": false, 00:14:52.713 "zerocopy_threshold": 0, 00:14:52.713 "tls_version": 0, 00:14:52.713 "enable_ktls": false 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "sock_impl_set_options", 00:14:52.713 "params": { 00:14:52.713 "impl_name": "posix", 
00:14:52.713 "recv_buf_size": 2097152, 00:14:52.713 "send_buf_size": 2097152, 00:14:52.713 "enable_recv_pipe": true, 00:14:52.713 "enable_quickack": false, 00:14:52.713 "enable_placement_id": 0, 00:14:52.713 "enable_zerocopy_send_server": true, 00:14:52.713 "enable_zerocopy_send_client": false, 00:14:52.713 "zerocopy_threshold": 0, 00:14:52.713 "tls_version": 0, 00:14:52.713 "enable_ktls": false 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "sock_impl_set_options", 00:14:52.713 "params": { 00:14:52.713 "impl_name": "uring", 00:14:52.713 "recv_buf_size": 2097152, 00:14:52.713 "send_buf_size": 2097152, 00:14:52.713 "enable_recv_pipe": true, 00:14:52.713 "enable_quickack": false, 00:14:52.713 "enable_placement_id": 0, 00:14:52.713 "enable_zerocopy_send_server": false, 00:14:52.713 "enable_zerocopy_send_client": false, 00:14:52.713 "zerocopy_threshold": 0, 00:14:52.713 "tls_version": 0, 00:14:52.713 "enable_ktls": false 00:14:52.713 } 00:14:52.713 } 00:14:52.713 ] 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "subsystem": "vmd", 00:14:52.713 "config": [] 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "subsystem": "accel", 00:14:52.713 "config": [ 00:14:52.713 { 00:14:52.713 "method": "accel_set_options", 00:14:52.713 "params": { 00:14:52.713 "small_cache_size": 128, 00:14:52.713 "large_cache_size": 16, 00:14:52.713 "task_count": 2048, 00:14:52.713 "sequence_count": 2048, 00:14:52.713 "buf_count": 2048 00:14:52.713 } 00:14:52.713 } 00:14:52.713 ] 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "subsystem": "bdev", 00:14:52.713 "config": [ 00:14:52.713 { 00:14:52.713 "method": "bdev_set_options", 00:14:52.713 "params": { 00:14:52.713 "bdev_io_pool_size": 65535, 00:14:52.713 "bdev_io_cache_size": 256, 00:14:52.713 "bdev_auto_examine": true, 00:14:52.713 "iobuf_small_cache_size": 128, 00:14:52.713 "iobuf_large_cache_size": 16 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "bdev_raid_set_options", 00:14:52.713 "params": { 00:14:52.713 "process_window_size_kb": 1024, 00:14:52.713 "process_max_bandwidth_mb_sec": 0 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "bdev_iscsi_set_options", 00:14:52.713 "params": { 00:14:52.713 "timeout_sec": 30 00:14:52.713 } 00:14:52.713 }, 00:14:52.713 { 00:14:52.713 "method": "bdev_nvme_set_options", 00:14:52.713 "params": { 00:14:52.713 "action_on_timeout": "none", 00:14:52.713 "timeout_us": 0, 00:14:52.713 "timeout_admin_us": 0, 00:14:52.713 "keep_alive_timeout_ms": 10000, 00:14:52.713 "arbitration_burst": 0, 00:14:52.713 "low_priority_weight": 0, 00:14:52.713 "medium_priority_weight": 0, 00:14:52.713 "high_priority_weight": 0, 00:14:52.713 "nvme_adminq_poll_period_us": 10000, 00:14:52.713 "nvme_ioq_poll_period_us": 0, 00:14:52.713 "io_queue_requests": 0, 00:14:52.713 "delay_cmd_submit": true, 00:14:52.713 "transport_retry_count": 4, 00:14:52.713 "bdev_retry_count": 3, 00:14:52.713 "transport_ack_timeout": 0, 00:14:52.713 "ctrlr_loss_timeout_sec": 0, 00:14:52.713 "reconnect_delay_sec": 0, 00:14:52.713 "fast_io_fail_timeout_sec": 0, 00:14:52.713 "disable_auto_failback": false, 00:14:52.713 "generate_uuids": false, 00:14:52.713 "transport_tos": 0, 00:14:52.713 "nvme_error_stat": false, 00:14:52.713 "rdma_srq_size": 0, 00:14:52.713 "io_path_stat": false, 00:14:52.713 "allow_accel_sequence": false, 00:14:52.713 "rdma_max_cq_size": 0, 00:14:52.713 "rdma_cm_event_timeout_ms": 0, 00:14:52.713 "dhchap_digests": [ 00:14:52.713 "sha256", 00:14:52.713 "sha384", 00:14:52.714 "sha512" 00:14:52.714 ], 00:14:52.714 
"dhchap_dhgroups": [ 00:14:52.714 "null", 00:14:52.714 "ffdhe2048", 00:14:52.714 "ffdhe3072", 00:14:52.714 "ffdhe4096", 00:14:52.714 "ffdhe6144", 00:14:52.714 "ffdhe8192" 00:14:52.714 ], 00:14:52.714 "rdma_umr_per_io": false 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "bdev_nvme_set_hotplug", 00:14:52.714 "params": { 00:14:52.714 "period_us": 100000, 00:14:52.714 "enable": false 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "bdev_malloc_create", 00:14:52.714 "params": { 00:14:52.714 "name": "malloc0", 00:14:52.714 "num_blocks": 8192, 00:14:52.714 "block_size": 4096, 00:14:52.714 "physical_block_size": 4096, 00:14:52.714 "uuid": "bb37c9f4-566d-40f7-a869-06dbcbb8d0fb", 00:14:52.714 "optimal_io_boundary": 0, 00:14:52.714 "md_size": 0, 00:14:52.714 "dif_type": 0, 00:14:52.714 "dif_is_head_of_md": false, 00:14:52.714 "dif_pi_format": 0 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "bdev_wait_for_examine" 00:14:52.714 } 00:14:52.714 ] 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "subsystem": "nbd", 00:14:52.714 "config": [] 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "subsystem": "scheduler", 00:14:52.714 "config": [ 00:14:52.714 { 00:14:52.714 "method": "framework_set_scheduler", 00:14:52.714 "params": { 00:14:52.714 "name": "static" 00:14:52.714 } 00:14:52.714 } 00:14:52.714 ] 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "subsystem": "nvmf", 00:14:52.714 "config": [ 00:14:52.714 { 00:14:52.714 "method": "nvmf_set_config", 00:14:52.714 "params": { 00:14:52.714 "discovery_filter": "match_any", 00:14:52.714 "admin_cmd_passthru": { 00:14:52.714 "identify_ctrlr": false 00:14:52.714 }, 00:14:52.714 "dhchap_digests": [ 00:14:52.714 "sha256", 00:14:52.714 "sha384", 00:14:52.714 "sha512" 00:14:52.714 ], 00:14:52.714 "dhchap_dhgroups": [ 00:14:52.714 "null", 00:14:52.714 "ffdhe2048", 00:14:52.714 "ffdhe3072", 00:14:52.714 "ffdhe4096", 00:14:52.714 "ffdhe6144", 00:14:52.714 "ffdhe8192" 00:14:52.714 ] 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_set_max_subsystems", 00:14:52.714 "params": { 00:14:52.714 "max_subsystems": 1024 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_set_crdt", 00:14:52.714 "params": { 00:14:52.714 "crdt1": 0, 00:14:52.714 "crdt2": 0, 00:14:52.714 "crdt3": 0 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_create_transport", 00:14:52.714 "params": { 00:14:52.714 "trtype": "TCP", 00:14:52.714 "max_queue_depth": 128, 00:14:52.714 "max_io_qpairs_per_ctrlr": 127, 00:14:52.714 "in_capsule_data_size": 4096, 00:14:52.714 "max_io_size": 131072, 00:14:52.714 "io_unit_size": 131072, 00:14:52.714 "max_aq_depth": 128, 00:14:52.714 "num_shared_buffers": 511, 00:14:52.714 "buf_cache_size": 4294967295, 00:14:52.714 "dif_insert_or_strip": false, 00:14:52.714 "zcopy": false, 00:14:52.714 "c2h_success": false, 00:14:52.714 "sock_priority": 0, 00:14:52.714 "abort_timeout_sec": 1, 00:14:52.714 "ack_timeout": 0, 00:14:52.714 "data_wr_pool_size": 0 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_create_subsystem", 00:14:52.714 "params": { 00:14:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.714 "allow_any_host": false, 00:14:52.714 "serial_number": "00000000000000000000", 00:14:52.714 "model_number": "SPDK bdev Controller", 00:14:52.714 "max_namespaces": 32, 00:14:52.714 "min_cntlid": 1, 00:14:52.714 "max_cntlid": 65519, 00:14:52.714 "ana_reporting": false 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 
"method": "nvmf_subsystem_add_host", 00:14:52.714 "params": { 00:14:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.714 "host": "nqn.2016-06.io.spdk:host1", 00:14:52.714 "psk": "key0" 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_subsystem_add_ns", 00:14:52.714 "params": { 00:14:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.714 "namespace": { 00:14:52.714 "nsid": 1, 00:14:52.714 "bdev_name": "malloc0", 00:14:52.714 "nguid": "BB37C9F4566D40F7A86906DBCBB8D0FB", 00:14:52.714 "uuid": "bb37c9f4-566d-40f7-a869-06dbcbb8d0fb", 00:14:52.714 "no_auto_visible": false 00:14:52.714 } 00:14:52.714 } 00:14:52.714 }, 00:14:52.714 { 00:14:52.714 "method": "nvmf_subsystem_add_listener", 00:14:52.714 "params": { 00:14:52.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.714 "listen_address": { 00:14:52.714 "trtype": "TCP", 00:14:52.714 "adrfam": "IPv4", 00:14:52.714 "traddr": "10.0.0.3", 00:14:52.714 "trsvcid": "4420" 00:14:52.714 }, 00:14:52.714 "secure_channel": false, 00:14:52.714 "sock_impl": "ssl" 00:14:52.714 } 00:14:52.714 } 00:14:52.714 ] 00:14:52.714 } 00:14:52.714 ] 00:14:52.714 }' 00:14:52.714 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:52.972 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:52.972 "subsystems": [ 00:14:52.972 { 00:14:52.972 "subsystem": "keyring", 00:14:52.972 "config": [ 00:14:52.972 { 00:14:52.972 "method": "keyring_file_add_key", 00:14:52.972 "params": { 00:14:52.972 "name": "key0", 00:14:52.972 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:52.972 } 00:14:52.972 } 00:14:52.972 ] 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "subsystem": "iobuf", 00:14:52.972 "config": [ 00:14:52.972 { 00:14:52.972 "method": "iobuf_set_options", 00:14:52.972 "params": { 00:14:52.972 "small_pool_count": 8192, 00:14:52.972 "large_pool_count": 1024, 00:14:52.972 "small_bufsize": 8192, 00:14:52.972 "large_bufsize": 135168, 00:14:52.972 "enable_numa": false 00:14:52.972 } 00:14:52.972 } 00:14:52.972 ] 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "subsystem": "sock", 00:14:52.972 "config": [ 00:14:52.972 { 00:14:52.972 "method": "sock_set_default_impl", 00:14:52.972 "params": { 00:14:52.972 "impl_name": "uring" 00:14:52.972 } 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "method": "sock_impl_set_options", 00:14:52.972 "params": { 00:14:52.972 "impl_name": "ssl", 00:14:52.972 "recv_buf_size": 4096, 00:14:52.972 "send_buf_size": 4096, 00:14:52.972 "enable_recv_pipe": true, 00:14:52.972 "enable_quickack": false, 00:14:52.972 "enable_placement_id": 0, 00:14:52.972 "enable_zerocopy_send_server": true, 00:14:52.972 "enable_zerocopy_send_client": false, 00:14:52.972 "zerocopy_threshold": 0, 00:14:52.972 "tls_version": 0, 00:14:52.972 "enable_ktls": false 00:14:52.972 } 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "method": "sock_impl_set_options", 00:14:52.972 "params": { 00:14:52.972 "impl_name": "posix", 00:14:52.972 "recv_buf_size": 2097152, 00:14:52.972 "send_buf_size": 2097152, 00:14:52.972 "enable_recv_pipe": true, 00:14:52.972 "enable_quickack": false, 00:14:52.972 "enable_placement_id": 0, 00:14:52.972 "enable_zerocopy_send_server": true, 00:14:52.972 "enable_zerocopy_send_client": false, 00:14:52.972 "zerocopy_threshold": 0, 00:14:52.972 "tls_version": 0, 00:14:52.972 "enable_ktls": false 00:14:52.972 } 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "method": "sock_impl_set_options", 00:14:52.972 "params": { 00:14:52.972 
"impl_name": "uring", 00:14:52.972 "recv_buf_size": 2097152, 00:14:52.972 "send_buf_size": 2097152, 00:14:52.972 "enable_recv_pipe": true, 00:14:52.972 "enable_quickack": false, 00:14:52.972 "enable_placement_id": 0, 00:14:52.972 "enable_zerocopy_send_server": false, 00:14:52.972 "enable_zerocopy_send_client": false, 00:14:52.972 "zerocopy_threshold": 0, 00:14:52.972 "tls_version": 0, 00:14:52.972 "enable_ktls": false 00:14:52.972 } 00:14:52.972 } 00:14:52.972 ] 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "subsystem": "vmd", 00:14:52.972 "config": [] 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "subsystem": "accel", 00:14:52.972 "config": [ 00:14:52.972 { 00:14:52.972 "method": "accel_set_options", 00:14:52.972 "params": { 00:14:52.972 "small_cache_size": 128, 00:14:52.972 "large_cache_size": 16, 00:14:52.972 "task_count": 2048, 00:14:52.972 "sequence_count": 2048, 00:14:52.972 "buf_count": 2048 00:14:52.972 } 00:14:52.972 } 00:14:52.972 ] 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "subsystem": "bdev", 00:14:52.972 "config": [ 00:14:52.972 { 00:14:52.972 "method": "bdev_set_options", 00:14:52.972 "params": { 00:14:52.972 "bdev_io_pool_size": 65535, 00:14:52.972 "bdev_io_cache_size": 256, 00:14:52.972 "bdev_auto_examine": true, 00:14:52.972 "iobuf_small_cache_size": 128, 00:14:52.972 "iobuf_large_cache_size": 16 00:14:52.972 } 00:14:52.972 }, 00:14:52.972 { 00:14:52.972 "method": "bdev_raid_set_options", 00:14:52.972 "params": { 00:14:52.972 "process_window_size_kb": 1024, 00:14:52.973 "process_max_bandwidth_mb_sec": 0 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_iscsi_set_options", 00:14:52.973 "params": { 00:14:52.973 "timeout_sec": 30 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_nvme_set_options", 00:14:52.973 "params": { 00:14:52.973 "action_on_timeout": "none", 00:14:52.973 "timeout_us": 0, 00:14:52.973 "timeout_admin_us": 0, 00:14:52.973 "keep_alive_timeout_ms": 10000, 00:14:52.973 "arbitration_burst": 0, 00:14:52.973 "low_priority_weight": 0, 00:14:52.973 "medium_priority_weight": 0, 00:14:52.973 "high_priority_weight": 0, 00:14:52.973 "nvme_adminq_poll_period_us": 10000, 00:14:52.973 "nvme_ioq_poll_period_us": 0, 00:14:52.973 "io_queue_requests": 512, 00:14:52.973 "delay_cmd_submit": true, 00:14:52.973 "transport_retry_count": 4, 00:14:52.973 "bdev_retry_count": 3, 00:14:52.973 "transport_ack_timeout": 0, 00:14:52.973 "ctrlr_loss_timeout_sec": 0, 00:14:52.973 "reconnect_delay_sec": 0, 00:14:52.973 "fast_io_fail_timeout_sec": 0, 00:14:52.973 "disable_auto_failback": false, 00:14:52.973 "generate_uuids": false, 00:14:52.973 "transport_tos": 0, 00:14:52.973 "nvme_error_stat": false, 00:14:52.973 "rdma_srq_size": 0, 00:14:52.973 "io_path_stat": false, 00:14:52.973 "allow_accel_sequence": false, 00:14:52.973 "rdma_max_cq_size": 0, 00:14:52.973 "rdma_cm_event_timeout_ms": 0, 00:14:52.973 "dhchap_digests": [ 00:14:52.973 "sha256", 00:14:52.973 "sha384", 00:14:52.973 "sha512" 00:14:52.973 ], 00:14:52.973 "dhchap_dhgroups": [ 00:14:52.973 "null", 00:14:52.973 "ffdhe2048", 00:14:52.973 "ffdhe3072", 00:14:52.973 "ffdhe4096", 00:14:52.973 "ffdhe6144", 00:14:52.973 "ffdhe8192" 00:14:52.973 ], 00:14:52.973 "rdma_umr_per_io": false 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_nvme_attach_controller", 00:14:52.973 "params": { 00:14:52.973 "name": "nvme0", 00:14:52.973 "trtype": "TCP", 00:14:52.973 "adrfam": "IPv4", 00:14:52.973 "traddr": "10.0.0.3", 00:14:52.973 "trsvcid": "4420", 00:14:52.973 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:52.973 "prchk_reftag": false, 00:14:52.973 "prchk_guard": false, 00:14:52.973 "ctrlr_loss_timeout_sec": 0, 00:14:52.973 "reconnect_delay_sec": 0, 00:14:52.973 "fast_io_fail_timeout_sec": 0, 00:14:52.973 "psk": "key0", 00:14:52.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.973 "hdgst": false, 00:14:52.973 "ddgst": false, 00:14:52.973 "multipath": "multipath" 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_nvme_set_hotplug", 00:14:52.973 "params": { 00:14:52.973 "period_us": 100000, 00:14:52.973 "enable": false 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_enable_histogram", 00:14:52.973 "params": { 00:14:52.973 "name": "nvme0n1", 00:14:52.973 "enable": true 00:14:52.973 } 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "method": "bdev_wait_for_examine" 00:14:52.973 } 00:14:52.973 ] 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "subsystem": "nbd", 00:14:52.973 "config": [] 00:14:52.973 } 00:14:52.973 ] 00:14:52.973 }' 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 73640 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73640 ']' 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73640 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73640 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:52.973 killing process with pid 73640 00:14:52.973 Received shutdown signal, test time was about 1.000000 seconds 00:14:52.973 00:14:52.973 Latency(us) 00:14:52.973 [2024-12-11T13:55:46.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.973 [2024-12-11T13:55:46.020Z] =================================================================================================================== 00:14:52.973 [2024-12-11T13:55:46.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73640' 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73640 00:14:52.973 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73640 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 73616 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73616 ']' 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73616 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73616 00:14:53.231 killing process with pid 73616 00:14:53.231 13:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73616' 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73616 00:14:53.231 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73616 00:14:53.489 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:53.489 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.489 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.489 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:53.489 "subsystems": [ 00:14:53.489 { 00:14:53.489 "subsystem": "keyring", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "keyring_file_add_key", 00:14:53.489 "params": { 00:14:53.489 "name": "key0", 00:14:53.489 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "iobuf", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "iobuf_set_options", 00:14:53.489 "params": { 00:14:53.489 "small_pool_count": 8192, 00:14:53.489 "large_pool_count": 1024, 00:14:53.489 "small_bufsize": 8192, 00:14:53.489 "large_bufsize": 135168, 00:14:53.489 "enable_numa": false 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "sock", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "sock_set_default_impl", 00:14:53.489 "params": { 00:14:53.489 "impl_name": "uring" 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "sock_impl_set_options", 00:14:53.489 "params": { 00:14:53.489 "impl_name": "ssl", 00:14:53.489 "recv_buf_size": 4096, 00:14:53.489 "send_buf_size": 4096, 00:14:53.489 "enable_recv_pipe": true, 00:14:53.489 "enable_quickack": false, 00:14:53.489 "enable_placement_id": 0, 00:14:53.489 "enable_zerocopy_send_server": true, 00:14:53.489 "enable_zerocopy_send_client": false, 00:14:53.489 "zerocopy_threshold": 0, 00:14:53.489 "tls_version": 0, 00:14:53.489 "enable_ktls": false 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "sock_impl_set_options", 00:14:53.489 "params": { 00:14:53.489 "impl_name": "posix", 00:14:53.489 "recv_buf_size": 2097152, 00:14:53.489 "send_buf_size": 2097152, 00:14:53.489 "enable_recv_pipe": true, 00:14:53.489 "enable_quickack": false, 00:14:53.489 "enable_placement_id": 0, 00:14:53.489 "enable_zerocopy_send_server": true, 00:14:53.489 "enable_zerocopy_send_client": false, 00:14:53.489 "zerocopy_threshold": 0, 00:14:53.489 "tls_version": 0, 00:14:53.489 "enable_ktls": false 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "sock_impl_set_options", 00:14:53.489 "params": { 00:14:53.489 "impl_name": "uring", 00:14:53.489 "recv_buf_size": 2097152, 00:14:53.489 "send_buf_size": 2097152, 00:14:53.489 "enable_recv_pipe": true, 00:14:53.489 "enable_quickack": false, 00:14:53.489 "enable_placement_id": 0, 00:14:53.489 "enable_zerocopy_send_server": false, 00:14:53.489 "enable_zerocopy_send_client": false, 00:14:53.489 "zerocopy_threshold": 0, 00:14:53.489 "tls_version": 0, 00:14:53.489 
"enable_ktls": false 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "vmd", 00:14:53.489 "config": [] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "accel", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "accel_set_options", 00:14:53.489 "params": { 00:14:53.489 "small_cache_size": 128, 00:14:53.489 "large_cache_size": 16, 00:14:53.489 "task_count": 2048, 00:14:53.489 "sequence_count": 2048, 00:14:53.489 "buf_count": 2048 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "bdev", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "bdev_set_options", 00:14:53.489 "params": { 00:14:53.489 "bdev_io_pool_size": 65535, 00:14:53.489 "bdev_io_cache_size": 256, 00:14:53.489 "bdev_auto_examine": true, 00:14:53.489 "iobuf_small_cache_size": 128, 00:14:53.489 "iobuf_large_cache_size": 16 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "bdev_raid_set_options", 00:14:53.489 "params": { 00:14:53.489 "process_window_size_kb": 1024, 00:14:53.489 "process_max_bandwidth_mb_sec": 0 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "bdev_iscsi_set_options", 00:14:53.489 "params": { 00:14:53.489 "timeout_sec": 30 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "bdev_nvme_set_options", 00:14:53.489 "params": { 00:14:53.489 "action_on_timeout": "none", 00:14:53.489 "timeout_us": 0, 00:14:53.489 "timeout_admin_us": 0, 00:14:53.489 "keep_alive_timeout_ms": 10000, 00:14:53.489 "arbitration_burst": 0, 00:14:53.489 "low_priority_weight": 0, 00:14:53.489 "medium_priority_weight": 0, 00:14:53.489 "high_priority_weight": 0, 00:14:53.489 "nvme_adminq_poll_period_us": 10000, 00:14:53.489 "nvme_ioq_poll_period_us": 0, 00:14:53.489 "io_queue_requests": 0, 00:14:53.489 "delay_cmd_submit": true, 00:14:53.489 "transport_retry_count": 4, 00:14:53.489 "bdev_retry_count": 3, 00:14:53.489 "transport_ack_timeout": 0, 00:14:53.489 "ctrlr_loss_timeout_sec": 0, 00:14:53.489 "reconnect_delay_sec": 0, 00:14:53.489 "fast_io_fail_timeout_sec": 0, 00:14:53.489 "disable_auto_failback": false, 00:14:53.489 "generate_uuids": false, 00:14:53.489 "transport_tos": 0, 00:14:53.489 "nvme_error_stat": false, 00:14:53.489 "rdma_srq_size": 0, 00:14:53.489 "io_path_stat": false, 00:14:53.489 "allow_accel_sequence": false, 00:14:53.489 "rdma_max_cq_size": 0, 00:14:53.489 "rdma_cm_event_timeout_ms": 0, 00:14:53.489 "dhchap_digests": [ 00:14:53.489 "sha256", 00:14:53.489 "sha384", 00:14:53.489 "sha512" 00:14:53.489 ], 00:14:53.489 "dhchap_dhgroups": [ 00:14:53.489 "null", 00:14:53.489 "ffdhe2048", 00:14:53.489 "ffdhe3072", 00:14:53.489 "ffdhe4096", 00:14:53.489 "ffdhe6144", 00:14:53.489 "ffdhe8192" 00:14:53.489 ], 00:14:53.489 "rdma_umr_per_io": false 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "bdev_nvme_set_hotplug", 00:14:53.489 "params": { 00:14:53.489 "period_us": 100000, 00:14:53.489 "enable": false 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "method": "bdev_malloc_create", 00:14:53.489 "params": { 00:14:53.489 "name": "malloc0", 00:14:53.489 "num_blocks": 8192, 00:14:53.489 "block_size": 4096, 00:14:53.489 "physical_block_size": 4096, 00:14:53.489 "uuid": "bb37c9f4-566d-40f7-a869-06dbcbb8d0fb", 00:14:53.489 "optimal_io_boundary": 0, 00:14:53.489 "md_size": 0, 00:14:53.489 "dif_type": 0, 00:14:53.489 "dif_is_head_of_md": false, 00:14:53.489 "dif_pi_format": 0 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 { 
00:14:53.489 "method": "bdev_wait_for_examine" 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "nbd", 00:14:53.489 "config": [] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "scheduler", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "framework_set_scheduler", 00:14:53.489 "params": { 00:14:53.489 "name": "static" 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ] 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "subsystem": "nvmf", 00:14:53.489 "config": [ 00:14:53.489 { 00:14:53.489 "method": "nvmf_set_config", 00:14:53.489 "params": { 00:14:53.489 "discovery_filter": "match_any", 00:14:53.489 "admin_cmd_passthru": { 00:14:53.489 "identify_ctrlr": false 00:14:53.490 }, 00:14:53.490 "dhchap_digests": [ 00:14:53.490 "sha256", 00:14:53.490 "sha384", 00:14:53.490 "sha512" 00:14:53.490 ], 00:14:53.490 "dhchap_dhgroups": [ 00:14:53.490 "null", 00:14:53.490 "ffdhe2048", 00:14:53.490 "ffdhe3072", 00:14:53.490 "ffdhe4096", 00:14:53.490 "ffdhe6144", 00:14:53.490 "ffdhe8192" 00:14:53.490 ] 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_set_max_subsystems", 00:14:53.490 "params": { 00:14:53.490 "max_subsystems": 1024 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_set_crdt", 00:14:53.490 "params": { 00:14:53.490 "crdt1": 0, 00:14:53.490 "crdt2": 0, 00:14:53.490 "crdt3": 0 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_create_transport", 00:14:53.490 "params": { 00:14:53.490 "trtype": "TCP", 00:14:53.490 "max_queue_depth": 128, 00:14:53.490 "max_io_qpairs_per_ctrlr": 127, 00:14:53.490 "in_capsule_data_size": 4096, 00:14:53.490 "max_io_size": 131072, 00:14:53.490 "io_unit_size": 131072, 00:14:53.490 "max_aq_depth": 128, 00:14:53.490 "num_shared_buffers": 511, 00:14:53.490 "buf_cache_size": 4294967295, 00:14:53.490 "dif_insert_or_strip": false, 00:14:53.490 "zcopy": false, 00:14:53.490 "c2h_success": false, 00:14:53.490 "sock_priority": 0, 00:14:53.490 "abort_timeout_sec": 1, 00:14:53.490 "ack_timeout": 0, 00:14:53.490 "data_wr_pool_size": 0 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_create_subsystem", 00:14:53.490 "params": { 00:14:53.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.490 "allow_any_host": false, 00:14:53.490 "serial_number": "00000000000000000000", 00:14:53.490 "model_number": "SPDK bdev Controller", 00:14:53.490 "max_namespaces": 32, 00:14:53.490 "min_cntlid": 1, 00:14:53.490 "max_cntlid": 65519, 00:14:53.490 "ana_reporting": false 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_subsystem_add_host", 00:14:53.490 "params": { 00:14:53.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.490 "host": "nqn.2016-06.io.spdk:host1", 00:14:53.490 "psk": "key0" 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_subsystem_add_ns", 00:14:53.490 "params": { 00:14:53.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.490 "namespace": { 00:14:53.490 "nsid": 1, 00:14:53.490 "bdev_name": "malloc0", 00:14:53.490 "nguid": "BB37C9F4566D40F7A86906DBCBB8D0FB", 00:14:53.490 "uuid": "bb37c9f4-566d-40f7-a869-06dbcbb8d0fb", 00:14:53.490 "no_auto_visible": false 00:14:53.490 } 00:14:53.490 } 00:14:53.490 }, 00:14:53.490 { 00:14:53.490 "method": "nvmf_subsystem_add_listener", 00:14:53.490 "params": { 00:14:53.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.490 "listen_address": { 00:14:53.490 "trtype": "TCP", 00:14:53.490 "adrfam": "IPv4", 00:14:53.490 "traddr": "10.0.0.3", 00:14:53.490 
"trsvcid": "4420" 00:14:53.490 }, 00:14:53.490 "secure_channel": false, 00:14:53.490 "sock_impl": "ssl" 00:14:53.490 } 00:14:53.490 } 00:14:53.490 ] 00:14:53.490 } 00:14:53.490 ] 00:14:53.490 }' 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73701 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73701 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73701 ']' 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.490 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.490 [2024-12-11 13:55:46.489867] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:53.490 [2024-12-11 13:55:46.489984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.780 [2024-12-11 13:55:46.641038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.780 [2024-12-11 13:55:46.707772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.780 [2024-12-11 13:55:46.707831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.780 [2024-12-11 13:55:46.707858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.780 [2024-12-11 13:55:46.707866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.780 [2024-12-11 13:55:46.707873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:53.780 [2024-12-11 13:55:46.708356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.037 [2024-12-11 13:55:46.883234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.037 [2024-12-11 13:55:46.970926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.037 [2024-12-11 13:55:47.002900] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.037 [2024-12-11 13:55:47.003196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=73733 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 73733 /var/tmp/bdevperf.sock 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73733 ']' 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:54.604 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:54.604 "subsystems": [ 00:14:54.604 { 00:14:54.604 "subsystem": "keyring", 00:14:54.604 "config": [ 00:14:54.604 { 00:14:54.604 "method": "keyring_file_add_key", 00:14:54.604 "params": { 00:14:54.604 "name": "key0", 00:14:54.604 "path": "/tmp/tmp.oO4tMaUNe5" 00:14:54.604 } 00:14:54.604 } 00:14:54.604 ] 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "subsystem": "iobuf", 00:14:54.604 "config": [ 00:14:54.604 { 00:14:54.604 "method": "iobuf_set_options", 00:14:54.604 "params": { 00:14:54.604 "small_pool_count": 8192, 00:14:54.604 "large_pool_count": 1024, 00:14:54.604 "small_bufsize": 8192, 00:14:54.604 "large_bufsize": 135168, 00:14:54.604 "enable_numa": false 00:14:54.604 } 00:14:54.604 } 00:14:54.604 ] 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "subsystem": "sock", 00:14:54.604 "config": [ 00:14:54.604 { 00:14:54.604 "method": "sock_set_default_impl", 00:14:54.604 "params": { 00:14:54.604 "impl_name": "uring" 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "sock_impl_set_options", 00:14:54.604 "params": { 00:14:54.604 "impl_name": "ssl", 00:14:54.604 "recv_buf_size": 4096, 00:14:54.604 "send_buf_size": 4096, 00:14:54.604 "enable_recv_pipe": true, 00:14:54.604 "enable_quickack": false, 00:14:54.604 "enable_placement_id": 0, 00:14:54.604 "enable_zerocopy_send_server": true, 00:14:54.604 "enable_zerocopy_send_client": false, 00:14:54.604 "zerocopy_threshold": 0, 00:14:54.604 "tls_version": 0, 00:14:54.604 "enable_ktls": false 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "sock_impl_set_options", 00:14:54.604 "params": { 00:14:54.604 "impl_name": "posix", 00:14:54.604 "recv_buf_size": 2097152, 00:14:54.604 "send_buf_size": 2097152, 00:14:54.604 "enable_recv_pipe": true, 00:14:54.604 "enable_quickack": false, 00:14:54.604 "enable_placement_id": 0, 00:14:54.604 "enable_zerocopy_send_server": true, 00:14:54.604 "enable_zerocopy_send_client": false, 00:14:54.604 "zerocopy_threshold": 0, 00:14:54.604 "tls_version": 0, 00:14:54.604 "enable_ktls": false 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "sock_impl_set_options", 00:14:54.604 "params": { 00:14:54.604 "impl_name": "uring", 00:14:54.604 "recv_buf_size": 2097152, 00:14:54.604 "send_buf_size": 2097152, 00:14:54.604 "enable_recv_pipe": true, 00:14:54.604 "enable_quickack": false, 00:14:54.604 "enable_placement_id": 0, 00:14:54.604 "enable_zerocopy_send_server": false, 00:14:54.604 "enable_zerocopy_send_client": false, 00:14:54.604 "zerocopy_threshold": 0, 00:14:54.604 "tls_version": 0, 00:14:54.604 "enable_ktls": false 00:14:54.604 } 00:14:54.604 } 00:14:54.604 ] 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "subsystem": "vmd", 00:14:54.604 "config": [] 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "subsystem": "accel", 00:14:54.604 "config": [ 00:14:54.604 { 00:14:54.604 "method": "accel_set_options", 00:14:54.604 "params": { 00:14:54.604 "small_cache_size": 128, 00:14:54.604 "large_cache_size": 16, 00:14:54.604 "task_count": 2048, 00:14:54.604 "sequence_count": 2048, 
00:14:54.604 "buf_count": 2048 00:14:54.604 } 00:14:54.604 } 00:14:54.604 ] 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "subsystem": "bdev", 00:14:54.604 "config": [ 00:14:54.604 { 00:14:54.604 "method": "bdev_set_options", 00:14:54.604 "params": { 00:14:54.604 "bdev_io_pool_size": 65535, 00:14:54.604 "bdev_io_cache_size": 256, 00:14:54.604 "bdev_auto_examine": true, 00:14:54.604 "iobuf_small_cache_size": 128, 00:14:54.604 "iobuf_large_cache_size": 16 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "bdev_raid_set_options", 00:14:54.604 "params": { 00:14:54.604 "process_window_size_kb": 1024, 00:14:54.604 "process_max_bandwidth_mb_sec": 0 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "bdev_iscsi_set_options", 00:14:54.604 "params": { 00:14:54.604 "timeout_sec": 30 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "bdev_nvme_set_options", 00:14:54.604 "params": { 00:14:54.604 "action_on_timeout": "none", 00:14:54.604 "timeout_us": 0, 00:14:54.604 "timeout_admin_us": 0, 00:14:54.604 "keep_alive_timeout_ms": 10000, 00:14:54.604 "arbitration_burst": 0, 00:14:54.604 "low_priority_weight": 0, 00:14:54.604 "medium_priority_weight": 0, 00:14:54.604 "high_priority_weight": 0, 00:14:54.604 "nvme_adminq_poll_period_us": 10000, 00:14:54.604 "nvme_ioq_poll_period_us": 0, 00:14:54.604 "io_queue_requests": 512, 00:14:54.604 "delay_cmd_submit": true, 00:14:54.604 "transport_retry_count": 4, 00:14:54.604 "bdev_retry_count": 3, 00:14:54.604 "transport_ack_timeout": 0, 00:14:54.604 "ctrlr_loss_timeout_sec": 0, 00:14:54.604 "reconnect_delay_sec": 0, 00:14:54.604 "fast_io_fail_timeout_sec": 0, 00:14:54.604 "disable_auto_failback": false, 00:14:54.604 "generate_uuids": false, 00:14:54.604 "transport_tos": 0, 00:14:54.604 "nvme_error_stat": false, 00:14:54.604 "rdma_srq_size": 0, 00:14:54.604 "io_path_stat": false, 00:14:54.604 "allow_accel_sequence": false, 00:14:54.604 "rdma_max_cq_size": 0, 00:14:54.604 "rdma_cm_event_timeout_ms": 0, 00:14:54.604 "dhchap_digests": [ 00:14:54.604 "sha256", 00:14:54.604 "sha384", 00:14:54.604 "sha512" 00:14:54.604 ], 00:14:54.604 "dhchap_dhgroups": [ 00:14:54.604 "null", 00:14:54.604 "ffdhe2048", 00:14:54.604 "ffdhe3072", 00:14:54.604 "ffdhe4096", 00:14:54.604 "ffdhe6144", 00:14:54.604 "ffdhe8192" 00:14:54.604 ], 00:14:54.604 "rdma_umr_per_io": false 00:14:54.604 } 00:14:54.604 }, 00:14:54.604 { 00:14:54.604 "method": "bdev_nvme_attach_controller", 00:14:54.604 "params": { 00:14:54.604 "name": "nvme0", 00:14:54.604 "trtype": "TCP", 00:14:54.604 "adrfam": "IPv4", 00:14:54.604 "traddr": "10.0.0.3", 00:14:54.604 "trsvcid": "4420", 00:14:54.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.604 "prchk_reftag": false, 00:14:54.604 "prchk_guard": false, 00:14:54.604 "ctrlr_loss_timeout_sec": 0, 00:14:54.604 "reconnect_delay_sec": 0, 00:14:54.604 "fast_io_fail_timeout_sec": 0, 00:14:54.604 "psk": "key0", 00:14:54.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.605 "hdgst": false, 00:14:54.605 "ddgst": false, 00:14:54.605 "multipath": "multipath" 00:14:54.605 } 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "method": "bdev_nvme_set_hotplug", 00:14:54.605 "params": { 00:14:54.605 "period_us": 100000, 00:14:54.605 "enable": false 00:14:54.605 } 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "method": "bdev_enable_histogram", 00:14:54.605 "params": { 00:14:54.605 "name": "nvme0n1", 00:14:54.605 "enable": true 00:14:54.605 } 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "method": "bdev_wait_for_examine" 00:14:54.605 } 
00:14:54.605 ] 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "subsystem": "nbd", 00:14:54.605 "config": [] 00:14:54.605 } 00:14:54.605 ] 00:14:54.605 }' 00:14:54.605 [2024-12-11 13:55:47.618357] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:54.605 [2024-12-11 13:55:47.618792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 00:14:54.863 [2024-12-11 13:55:47.773305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.863 [2024-12-11 13:55:47.849005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.121 [2024-12-11 13:55:47.995794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.121 [2024-12-11 13:55:48.057233] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.685 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.685 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:55.685 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:55.685 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:56.251 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.251 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.251 Running I/O for 1 seconds... 
00:14:57.185 3939.00 IOPS, 15.39 MiB/s 00:14:57.185 Latency(us) 00:14:57.185 [2024-12-11T13:55:50.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.185 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:57.185 Verification LBA range: start 0x0 length 0x2000 00:14:57.185 nvme0n1 : 1.02 4003.48 15.64 0.00 0.00 31706.83 5123.72 24546.21 00:14:57.185 [2024-12-11T13:55:50.232Z] =================================================================================================================== 00:14:57.185 [2024-12-11T13:55:50.232Z] Total : 4003.48 15.64 0.00 0.00 31706.83 5123.72 24546.21 00:14:57.185 { 00:14:57.185 "results": [ 00:14:57.185 { 00:14:57.185 "job": "nvme0n1", 00:14:57.185 "core_mask": "0x2", 00:14:57.185 "workload": "verify", 00:14:57.185 "status": "finished", 00:14:57.185 "verify_range": { 00:14:57.185 "start": 0, 00:14:57.185 "length": 8192 00:14:57.185 }, 00:14:57.185 "queue_depth": 128, 00:14:57.185 "io_size": 4096, 00:14:57.185 "runtime": 1.015867, 00:14:57.185 "iops": 4003.4768330893708, 00:14:57.185 "mibps": 15.638581379255355, 00:14:57.185 "io_failed": 0, 00:14:57.185 "io_timeout": 0, 00:14:57.185 "avg_latency_us": 31706.828642063614, 00:14:57.185 "min_latency_us": 5123.723636363637, 00:14:57.185 "max_latency_us": 24546.21090909091 00:14:57.185 } 00:14:57.185 ], 00:14:57.185 "core_count": 1 00:14:57.185 } 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:57.185 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:57.185 nvmf_trace.0 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73733 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73733 ']' 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73733 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73733 00:14:57.443 killing process 
with pid 73733 00:14:57.443 Received shutdown signal, test time was about 1.000000 seconds 00:14:57.443 00:14:57.443 Latency(us) 00:14:57.443 [2024-12-11T13:55:50.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.443 [2024-12-11T13:55:50.490Z] =================================================================================================================== 00:14:57.443 [2024-12-11T13:55:50.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73733' 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73733 00:14:57.443 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73733 00:14:57.700 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:57.701 rmmod nvme_tcp 00:14:57.701 rmmod nvme_fabrics 00:14:57.701 rmmod nvme_keyring 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 73701 ']' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 73701 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73701 ']' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73701 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73701 00:14:57.701 killing process with pid 73701 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73701' 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73701 00:14:57.701 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73701 
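The restart pattern exercised in this test case — dump the live configuration with save_config, then feed that JSON verbatim to a fresh process over an anonymous file descriptor — corresponds roughly to the sketch below. It assumes a running target on the default /var/tmp/spdk.sock and uses the same binaries and bdevperf options shown in the trace above; the test itself passes the config through /dev/fd/62 and /dev/fd/63.

    # capture the current target configuration as JSON
    tgtcfg=$(scripts/rpc.py save_config)
    # start a fresh target from that JSON via process substitution
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    # the bdevperf side is relaunched the same way, from its own saved config
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &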
00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:57.959 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:58.216 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:58.216 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cu2lpdmxYW /tmp/tmp.UHnatT9J3Z /tmp/tmp.oO4tMaUNe5 00:14:58.217 00:14:58.217 real 1m28.410s 00:14:58.217 user 2m24.070s 00:14:58.217 sys 0m27.865s 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.217 
************************************ 00:14:58.217 END TEST nvmf_tls 00:14:58.217 ************************************ 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.217 ************************************ 00:14:58.217 START TEST nvmf_fips 00:14:58.217 ************************************ 00:14:58.217 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:58.476 * Looking for test storage... 00:14:58.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.476 --rc genhtml_branch_coverage=1 00:14:58.476 --rc genhtml_function_coverage=1 00:14:58.476 --rc genhtml_legend=1 00:14:58.476 --rc geninfo_all_blocks=1 00:14:58.476 --rc geninfo_unexecuted_blocks=1 00:14:58.476 00:14:58.476 ' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.476 --rc genhtml_branch_coverage=1 00:14:58.476 --rc genhtml_function_coverage=1 00:14:58.476 --rc genhtml_legend=1 00:14:58.476 --rc geninfo_all_blocks=1 00:14:58.476 --rc geninfo_unexecuted_blocks=1 00:14:58.476 00:14:58.476 ' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.476 --rc genhtml_branch_coverage=1 00:14:58.476 --rc genhtml_function_coverage=1 00:14:58.476 --rc genhtml_legend=1 00:14:58.476 --rc geninfo_all_blocks=1 00:14:58.476 --rc geninfo_unexecuted_blocks=1 00:14:58.476 00:14:58.476 ' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.476 --rc genhtml_branch_coverage=1 00:14:58.476 --rc genhtml_function_coverage=1 00:14:58.476 --rc genhtml_legend=1 00:14:58.476 --rc geninfo_all_blocks=1 00:14:58.476 --rc geninfo_unexecuted_blocks=1 00:14:58.476 00:14:58.476 ' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.476 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:58.477 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:58.736 Error setting digest 00:14:58.736 40E2D646427F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:58.736 40E2D646427F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:58.736 
13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:58.736 Cannot find device "nvmf_init_br" 00:14:58.736 13:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:58.736 Cannot find device "nvmf_init_br2" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:58.736 Cannot find device "nvmf_tgt_br" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.736 Cannot find device "nvmf_tgt_br2" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:58.736 Cannot find device "nvmf_init_br" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:58.736 Cannot find device "nvmf_init_br2" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:58.736 Cannot find device "nvmf_tgt_br" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:58.736 Cannot find device "nvmf_tgt_br2" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:58.736 Cannot find device "nvmf_br" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:58.736 Cannot find device "nvmf_init_if" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:58.736 Cannot find device "nvmf_init_if2" 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.736 13:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.736 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:58.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:58.995 00:14:58.995 --- 10.0.0.3 ping statistics --- 00:14:58.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.995 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:58.995 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:58.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:58.996 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:58.996 00:14:58.996 --- 10.0.0.4 ping statistics --- 00:14:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.996 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:58.996 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:58.996 00:14:58.996 --- 10.0.0.1 ping statistics --- 00:14:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.996 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:58.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:58.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:58.996 00:14:58.996 --- 10.0.0.2 ping statistics --- 00:14:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.996 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=74067 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 74067 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 74067 ']' 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.996 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.254 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.254 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:59.254 [2024-12-11 13:55:52.135877] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:59.254 [2024-12-11 13:55:52.135991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.254 [2024-12-11 13:55:52.291901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.512 [2024-12-11 13:55:52.366779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.512 [2024-12-11 13:55:52.366849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.512 [2024-12-11 13:55:52.366862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.512 [2024-12-11 13:55:52.366873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.512 [2024-12-11 13:55:52.366882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.512 [2024-12-11 13:55:52.367411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.512 [2024-12-11 13:55:52.432263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Qhd 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Qhd 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Qhd 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Qhd 00:15:00.447 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.705 [2024-12-11 13:55:53.539625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.705 [2024-12-11 13:55:53.555575] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.705 [2024-12-11 13:55:53.555827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:00.705 malloc0 00:15:00.705 13:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=74103 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 74103 /var/tmp/bdevperf.sock 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 74103 ']' 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.705 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.705 [2024-12-11 13:55:53.720000] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:00.705 [2024-12-11 13:55:53.720079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74103 ] 00:15:00.965 [2024-12-11 13:55:53.866765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.965 [2024-12-11 13:55:53.927984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.965 [2024-12-11 13:55:53.986411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.901 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.901 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:01.901 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Qhd 00:15:02.160 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:02.420 [2024-12-11 13:55:55.241084] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.420 TLSTESTn1 00:15:02.420 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.677 Running I/O for 10 seconds... 
00:15:04.545 3795.00 IOPS, 14.82 MiB/s [2024-12-11T13:55:58.525Z] 3840.00 IOPS, 15.00 MiB/s [2024-12-11T13:55:59.897Z] 3902.00 IOPS, 15.24 MiB/s [2024-12-11T13:56:00.829Z] 3886.25 IOPS, 15.18 MiB/s [2024-12-11T13:56:01.763Z] 3869.20 IOPS, 15.11 MiB/s [2024-12-11T13:56:02.698Z] 3859.17 IOPS, 15.07 MiB/s [2024-12-11T13:56:03.632Z] 3840.71 IOPS, 15.00 MiB/s [2024-12-11T13:56:04.568Z] 3831.38 IOPS, 14.97 MiB/s [2024-12-11T13:56:05.502Z] 3837.78 IOPS, 14.99 MiB/s [2024-12-11T13:56:05.502Z] 3839.40 IOPS, 15.00 MiB/s 00:15:12.455 Latency(us) 00:15:12.455 [2024-12-11T13:56:05.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.455 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:12.455 Verification LBA range: start 0x0 length 0x2000 00:15:12.455 TLSTESTn1 : 10.03 3840.74 15.00 0.00 0.00 33257.20 7626.01 23473.80 00:15:12.455 [2024-12-11T13:56:05.502Z] =================================================================================================================== 00:15:12.455 [2024-12-11T13:56:05.502Z] Total : 3840.74 15.00 0.00 0.00 33257.20 7626.01 23473.80 00:15:12.455 { 00:15:12.455 "results": [ 00:15:12.455 { 00:15:12.455 "job": "TLSTESTn1", 00:15:12.455 "core_mask": "0x4", 00:15:12.455 "workload": "verify", 00:15:12.455 "status": "finished", 00:15:12.455 "verify_range": { 00:15:12.455 "start": 0, 00:15:12.455 "length": 8192 00:15:12.455 }, 00:15:12.455 "queue_depth": 128, 00:15:12.455 "io_size": 4096, 00:15:12.455 "runtime": 10.02957, 00:15:12.455 "iops": 3840.742923176168, 00:15:12.455 "mibps": 15.002902043656906, 00:15:12.455 "io_failed": 0, 00:15:12.455 "io_timeout": 0, 00:15:12.455 "avg_latency_us": 33257.200424231414, 00:15:12.455 "min_latency_us": 7626.007272727273, 00:15:12.455 "max_latency_us": 23473.803636363635 00:15:12.455 } 00:15:12.455 ], 00:15:12.455 "core_count": 1 00:15:12.455 } 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:12.713 nvmf_trace.0 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74103 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 74103 ']' 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
74103 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74103 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:12.713 killing process with pid 74103 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74103' 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 74103 00:15:12.713 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.713 00:15:12.713 Latency(us) 00:15:12.713 [2024-12-11T13:56:05.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.713 [2024-12-11T13:56:05.760Z] =================================================================================================================== 00:15:12.713 [2024-12-11T13:56:05.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.713 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 74103 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.972 rmmod nvme_tcp 00:15:12.972 rmmod nvme_fabrics 00:15:12.972 rmmod nvme_keyring 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 74067 ']' 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 74067 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 74067 ']' 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 74067 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.972 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74067 00:15:12.972 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.972 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:15:12.972 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74067' 00:15:12.972 killing process with pid 74067 00:15:12.972 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 74067 00:15:12.972 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 74067 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:13.230 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:13.231 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:13.489 13:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Qhd 00:15:13.489 00:15:13.489 real 0m15.287s 00:15:13.489 user 0m21.486s 00:15:13.489 sys 0m5.762s 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.489 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.489 ************************************ 00:15:13.489 END TEST nvmf_fips 00:15:13.490 ************************************ 00:15:13.490 13:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:13.490 13:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:13.490 13:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.490 13:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.749 ************************************ 00:15:13.749 START TEST nvmf_control_msg_list 00:15:13.749 ************************************ 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:13.749 * Looking for test storage... 00:15:13.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.749 --rc genhtml_branch_coverage=1 00:15:13.749 --rc genhtml_function_coverage=1 00:15:13.749 --rc genhtml_legend=1 00:15:13.749 --rc geninfo_all_blocks=1 00:15:13.749 --rc geninfo_unexecuted_blocks=1 00:15:13.749 00:15:13.749 ' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.749 --rc genhtml_branch_coverage=1 00:15:13.749 --rc genhtml_function_coverage=1 00:15:13.749 --rc genhtml_legend=1 00:15:13.749 --rc geninfo_all_blocks=1 00:15:13.749 --rc geninfo_unexecuted_blocks=1 00:15:13.749 00:15:13.749 ' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.749 --rc genhtml_branch_coverage=1 00:15:13.749 --rc genhtml_function_coverage=1 00:15:13.749 --rc genhtml_legend=1 00:15:13.749 --rc geninfo_all_blocks=1 00:15:13.749 --rc geninfo_unexecuted_blocks=1 00:15:13.749 00:15:13.749 ' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.749 --rc genhtml_branch_coverage=1 00:15:13.749 --rc genhtml_function_coverage=1 00:15:13.749 --rc genhtml_legend=1 00:15:13.749 --rc geninfo_all_blocks=1 00:15:13.749 --rc 
geninfo_unexecuted_blocks=1 00:15:13.749 00:15:13.749 ' 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.749 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.750 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:13.750 Cannot find device "nvmf_init_br" 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:13.750 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:14.009 Cannot find device "nvmf_init_br2" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:14.009 Cannot find device "nvmf_tgt_br" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.009 Cannot find device "nvmf_tgt_br2" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:14.009 Cannot find device "nvmf_init_br" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:14.009 Cannot find device "nvmf_init_br2" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:14.009 Cannot find device "nvmf_tgt_br" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:14.009 Cannot find device "nvmf_tgt_br2" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:14.009 Cannot find device "nvmf_br" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:14.009 Cannot find 
device "nvmf_init_if" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:14.009 Cannot find device "nvmf_init_if2" 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.009 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.009 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.009 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.009 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:14.009 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:14.268 13:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:14.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:14.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:14.268 00:15:14.268 --- 10.0.0.3 ping statistics --- 00:15:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.268 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:14.268 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:14.268 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:14.268 00:15:14.268 --- 10.0.0.4 ping statistics --- 00:15:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.268 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:14.268 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:14.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:14.269 00:15:14.269 --- 10.0.0.1 ping statistics --- 00:15:14.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.269 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:14.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:14.269 00:15:14.269 --- 10.0.0.2 ping statistics --- 00:15:14.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.269 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=74495 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 74495 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 74495 ']' 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
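[editor's note] The trace above starts the target with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF (pid 74495) and then waits, for up to 100 retries, on the RPC socket /var/tmp/spdk.sock; the entries that follow provision the transport, subsystem, malloc namespace and TCP listener, and launch three spdk_nvme_perf clients against it. A minimal bash sketch of that sequence is shown below. The polling loop and the direct scripts/rpc.py probe are assumptions on my part (the autotest waitforlisten/rpc_cmd helpers are more elaborate); the binary paths, RPC names, arguments and addresses are taken verbatim from the trace.

  # Sketch only, condensed from the commands visible in this trace.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Start the NVMe-oF target inside the test namespace (as nvmfappstart does above).
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

  # waitforlisten: poll until the RPC socket exists and answers (probe is an assumption).
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && $RPC rpc_get_methods &> /dev/null && break
      sleep 0.5
  done

  # Provisioning sequence traced below (target/control_msg_list.sh):
  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # One of the three concurrent perf clients (cores 0x2, 0x4 and 0x8 in the log):
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'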
00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.269 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.269 [2024-12-11 13:56:07.296368] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:14.269 [2024-12-11 13:56:07.296474] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.527 [2024-12-11 13:56:07.455833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.527 [2024-12-11 13:56:07.523813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.527 [2024-12-11 13:56:07.523879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.527 [2024-12-11 13:56:07.523893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.527 [2024-12-11 13:56:07.523904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.527 [2024-12-11 13:56:07.523914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.527 [2024-12-11 13:56:07.524407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.786 [2024-12-11 13:56:07.587461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 [2024-12-11 13:56:07.712851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 Malloc0 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:14.786 [2024-12-11 13:56:07.754682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=74518 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=74519 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=74520 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:14.786 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 74518 00:15:15.044 [2024-12-11 13:56:07.943024] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:15.044 [2024-12-11 13:56:07.953273] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:15.044 [2024-12-11 13:56:07.953557] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:15.980 Initializing NVMe Controllers 00:15:15.980 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:15.980 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:15.980 Initialization complete. Launching workers. 00:15:15.980 ======================================================== 00:15:15.980 Latency(us) 00:15:15.980 Device Information : IOPS MiB/s Average min max 00:15:15.980 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3307.00 12.92 302.02 147.93 660.83 00:15:15.980 ======================================================== 00:15:15.980 Total : 3307.00 12.92 302.02 147.93 660.83 00:15:15.980 00:15:15.980 Initializing NVMe Controllers 00:15:15.980 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:15.980 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:15.980 Initialization complete. Launching workers. 00:15:15.980 ======================================================== 00:15:15.980 Latency(us) 00:15:15.980 Device Information : IOPS MiB/s Average min max 00:15:15.980 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3349.00 13.08 298.21 171.01 714.42 00:15:15.980 ======================================================== 00:15:15.980 Total : 3349.00 13.08 298.21 171.01 714.42 00:15:15.980 00:15:15.980 Initializing NVMe Controllers 00:15:15.980 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:15.980 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:15.980 Initialization complete. Launching workers. 
00:15:15.980 ======================================================== 00:15:15.980 Latency(us) 00:15:15.980 Device Information : IOPS MiB/s Average min max 00:15:15.980 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3347.00 13.07 298.43 191.10 415.12 00:15:15.980 ======================================================== 00:15:15.980 Total : 3347.00 13.07 298.43 191.10 415.12 00:15:15.980 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 74519 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 74520 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:15.980 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.239 rmmod nvme_tcp 00:15:16.239 rmmod nvme_fabrics 00:15:16.239 rmmod nvme_keyring 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 74495 ']' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 74495 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 74495 ']' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 74495 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74495 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.239 killing process with pid 74495 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74495' 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 74495 00:15:16.239 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 74495 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:16.498 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:16.757 00:15:16.757 real 0m3.069s 00:15:16.757 user 0m4.922s 00:15:16.757 
sys 0m1.356s 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.757 ************************************ 00:15:16.757 END TEST nvmf_control_msg_list 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:16.757 ************************************ 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.757 13:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.757 ************************************ 00:15:16.757 START TEST nvmf_wait_for_buf 00:15:16.757 ************************************ 00:15:16.758 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:16.758 * Looking for test storage... 00:15:16.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.758 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.758 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.758 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.017 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.018 --rc genhtml_branch_coverage=1 00:15:17.018 --rc genhtml_function_coverage=1 00:15:17.018 --rc genhtml_legend=1 00:15:17.018 --rc geninfo_all_blocks=1 00:15:17.018 --rc geninfo_unexecuted_blocks=1 00:15:17.018 00:15:17.018 ' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.018 --rc genhtml_branch_coverage=1 00:15:17.018 --rc genhtml_function_coverage=1 00:15:17.018 --rc genhtml_legend=1 00:15:17.018 --rc geninfo_all_blocks=1 00:15:17.018 --rc geninfo_unexecuted_blocks=1 00:15:17.018 00:15:17.018 ' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.018 --rc genhtml_branch_coverage=1 00:15:17.018 --rc genhtml_function_coverage=1 00:15:17.018 --rc genhtml_legend=1 00:15:17.018 --rc geninfo_all_blocks=1 00:15:17.018 --rc geninfo_unexecuted_blocks=1 00:15:17.018 00:15:17.018 ' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.018 --rc genhtml_branch_coverage=1 00:15:17.018 --rc genhtml_function_coverage=1 00:15:17.018 --rc genhtml_legend=1 00:15:17.018 --rc geninfo_all_blocks=1 00:15:17.018 --rc geninfo_unexecuted_blocks=1 00:15:17.018 00:15:17.018 ' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.018 13:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.018 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
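[editor's note] nvmftestinit for this test rebuilds, in the trace that follows, the same veth/namespace/bridge topology that the previous test tore down. A condensed, hedged sketch of that topology is below, keeping only one of the two initiator/target pairs; the real nvmf_veth_init, as traced, also creates nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2 and 10.0.0.4 and tags each iptables rule with an SPDK_NVMF comment. All device names and addresses are taken from the log; only the condensation is mine.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the host-side veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the listener port
  ping -c 1 10.0.0.3                                           # initiator-to-target reachability check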
00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.018 Cannot find device "nvmf_init_br" 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.018 Cannot find device "nvmf_init_br2" 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:17.018 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:17.019 Cannot find device "nvmf_tgt_br" 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.019 Cannot find device "nvmf_tgt_br2" 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.019 Cannot find device "nvmf_init_br" 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.019 Cannot find device "nvmf_init_br2" 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.019 Cannot find device "nvmf_tgt_br" 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:17.019 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:17.019 Cannot find device "nvmf_tgt_br2" 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:17.019 Cannot find device "nvmf_br" 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:17.019 Cannot find device "nvmf_init_if" 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:17.019 Cannot find device "nvmf_init_if2" 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.019 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.019 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.278 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:15:17.537 00:15:17.537 --- 10.0.0.3 ping statistics --- 00:15:17.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.537 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:17.537 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.537 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:17.537 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:15:17.537 00:15:17.537 --- 10.0.0.4 ping statistics --- 00:15:17.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.537 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:17.537 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:15:17.537 00:15:17.537 --- 10.0.0.1 ping statistics --- 00:15:17.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.537 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:17.537 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:15:17.537 00:15:17.537 --- 10.0.0.2 ping statistics --- 00:15:17.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.537 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:17.537 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.537 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=74761 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 74761 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 74761 ']' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.538 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.538 [2024-12-11 13:56:10.439083] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
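The trace above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so the buffer pools can be shrunk over RPC before the framework initializes. A minimal standalone sketch of that flow, assuming the default /var/tmp/spdk.sock RPC socket and polling for the socket file in place of the suite's waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done    # crude stand-in for waitforlisten
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init    # pools are created with the shrunken sizes only now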
00:15:17.538 [2024-12-11 13:56:10.439244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.797 [2024-12-11 13:56:10.591071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.797 [2024-12-11 13:56:10.651644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.797 [2024-12-11 13:56:10.651734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.797 [2024-12-11 13:56:10.651764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.797 [2024-12-11 13:56:10.651783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.797 [2024-12-11 13:56:10.651791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.797 [2024-12-11 13:56:10.652194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.797 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.798 13:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:17.798 [2024-12-11 13:56:10.794513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.798 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 Malloc0 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 [2024-12-11 13:56:10.865415] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 [2024-12-11 13:56:10.893564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:18.058 [2024-12-11 13:56:11.101869] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:19.432 Initializing NVMe Controllers 00:15:19.432 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:19.432 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:19.432 Initialization complete. Launching workers. 00:15:19.432 ======================================================== 00:15:19.432 Latency(us) 00:15:19.433 Device Information : IOPS MiB/s Average min max 00:15:19.433 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7975.43 6269.05 9054.64 00:15:19.433 ======================================================== 00:15:19.433 Total : 504.00 63.00 7975.43 6269.05 9054.64 00:15:19.433 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:19.433 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:19.690 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:19.691 rmmod nvme_tcp 00:15:19.691 rmmod nvme_fabrics 00:15:19.691 rmmod nvme_keyring 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 74761 ']' 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 74761 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 74761 ']' 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 74761 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74761 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74761' 00:15:19.691 killing process with pid 74761 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 74761 00:15:19.691 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 74761 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:19.950 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:20.209 00:15:20.209 real 0m3.395s 00:15:20.209 user 0m2.620s 00:15:20.209 sys 0m0.882s 00:15:20.209 ************************************ 00:15:20.209 END TEST nvmf_wait_for_buf 00:15:20.209 ************************************ 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 ************************************ 00:15:20.209 START TEST nvmf_nsid 00:15:20.209 ************************************ 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:20.209 * Looking for test storage... 
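The wait_for_buf run that finished above passed because the deliberately tiny small-buffer pool (154 buffers, set before framework init) forced the TCP transport onto its wait-for-buffer path during the 4-deep 128 KiB random-read perf run; the assertion is simply that the nvmf_TCP small_pool retry counter is non-zero (4788 in this run). A sketch of that check, assuming rpc.py reaches the same /var/tmp/spdk.sock socket the test used:

    retry_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo 'iobuf wait-for-buffer path was never exercised' >&2
        exit 1
    fi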
00:15:20.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:20.209 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.468 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.469 --rc genhtml_branch_coverage=1 00:15:20.469 --rc genhtml_function_coverage=1 00:15:20.469 --rc genhtml_legend=1 00:15:20.469 --rc geninfo_all_blocks=1 00:15:20.469 --rc geninfo_unexecuted_blocks=1 00:15:20.469 00:15:20.469 ' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.469 --rc genhtml_branch_coverage=1 00:15:20.469 --rc genhtml_function_coverage=1 00:15:20.469 --rc genhtml_legend=1 00:15:20.469 --rc geninfo_all_blocks=1 00:15:20.469 --rc geninfo_unexecuted_blocks=1 00:15:20.469 00:15:20.469 ' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.469 --rc genhtml_branch_coverage=1 00:15:20.469 --rc genhtml_function_coverage=1 00:15:20.469 --rc genhtml_legend=1 00:15:20.469 --rc geninfo_all_blocks=1 00:15:20.469 --rc geninfo_unexecuted_blocks=1 00:15:20.469 00:15:20.469 ' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.469 --rc genhtml_branch_coverage=1 00:15:20.469 --rc genhtml_function_coverage=1 00:15:20.469 --rc genhtml_legend=1 00:15:20.469 --rc geninfo_all_blocks=1 00:15:20.469 --rc geninfo_unexecuted_blocks=1 00:15:20.469 00:15:20.469 ' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
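The lcov probing traced above uses the component-wise version comparison from scripts/common.sh (lt/cmp_versions) to decide which coverage flags to export. A self-contained sketch in the same spirit, not the exact helper; the lt name and two-argument form mirror the call seen in the trace:

    lt() {    # return 0 when $1 sorts strictly before $2, comparing dot/dash-separated numeric fields
        local i IFS=.-
        local -a v1=($1) v2=($2)
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov predates 2.x'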
00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:20.469 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:20.470 Cannot find device "nvmf_init_br" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:20.470 Cannot find device "nvmf_init_br2" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:20.470 Cannot find device "nvmf_tgt_br" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.470 Cannot find device "nvmf_tgt_br2" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:20.470 Cannot find device "nvmf_init_br" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:20.470 Cannot find device "nvmf_init_br2" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:20.470 Cannot find device "nvmf_tgt_br" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:20.470 Cannot find device "nvmf_tgt_br2" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:20.470 Cannot find device "nvmf_br" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:20.470 Cannot find device "nvmf_init_if" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:20.470 Cannot find device "nvmf_init_if2" 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:20.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.470 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
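At this point nvmf_veth_init has rebuilt the test topology for the nsid run: initiator interfaces stay on the host (10.0.0.1/10.0.0.2), target interfaces move into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), and both sides are joined through the nvmf_br bridge. A condensed sketch using only the first veth pair of each kind; names and addresses are taken from the trace, and the second pair plus error handling are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br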
00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:20.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:20.729 00:15:20.729 --- 10.0.0.3 ping statistics --- 00:15:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.729 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:20.729 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:20.729 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:20.729 00:15:20.729 --- 10.0.0.4 ping statistics --- 00:15:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.729 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:20.729 00:15:20.729 --- 10.0.0.1 ping statistics --- 00:15:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.729 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:20.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:20.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:20.729 00:15:20.729 --- 10.0.0.2 ping statistics --- 00:15:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.729 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:20.729 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=75021 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 75021 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 75021 ']' 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.988 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 [2024-12-11 13:56:13.850913] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
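The firewall openings a few lines up go through the suite's ipts wrapper, which tags every rule with an SPDK_NVMF comment so that teardown (the iptr call seen in the earlier cleanup) can strip exactly those rules with a save/filter/restore cycle. Approximate re-creations of the two helpers, not the literal nvmf/common.sh definitions:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # tagged on insertion
    iptr                                                             # later drops every tagged rule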
00:15:20.988 [2024-12-11 13:56:13.851277] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.988 [2024-12-11 13:56:14.011573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.247 [2024-12-11 13:56:14.084559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.247 [2024-12-11 13:56:14.084873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.247 [2024-12-11 13:56:14.084913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.247 [2024-12-11 13:56:14.084923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.247 [2024-12-11 13:56:14.084932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.247 [2024-12-11 13:56:14.085399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.247 [2024-12-11 13:56:14.150736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=75053 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0dc02832-8a0e-405a-b8dd-d52e036333fd 00:15:22.183 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=afc5d817-bc9f-4803-8f30-b3d868371f9c 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f9eb9af5-384f-4503-a24d-38dbc44eed24 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.184 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:22.184 null0 00:15:22.184 null1 00:15:22.184 null2 00:15:22.184 [2024-12-11 13:56:14.957639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.184 [2024-12-11 13:56:14.975244] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:22.184 [2024-12-11 13:56:14.975324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75053 ] 00:15:22.184 [2024-12-11 13:56:14.981860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 75053 /var/tmp/tgt2.sock 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 75053 ']' 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:15:22.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
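The second target above is started with its own RPC socket (-r /var/tmp/tgt2.sock), and the script blocks until that socket answers before configuring it. A minimal sketch of such a wait loop, assuming rpc.py from the checked-out repo and its spdk_get_version method; this is an illustration, not the suite's own waitforlisten helper:

# Sketch only: poll the target's UNIX-domain RPC socket until it answers
# before sending any configuration to the freshly started spdk_tgt.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/tgt2.sock
wait_for_rpc() {
    local retries=100
    while (( retries-- > 0 )); do
        # spdk_get_version succeeds only once the app's RPC server is listening
        if "$rpc_py" -s "$rpc_sock" spdk_get_version &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_sock" >&2
    return 1
}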
00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.184 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:22.184 [2024-12-11 13:56:15.128649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.184 [2024-12-11 13:56:15.195125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.442 [2024-12-11 13:56:15.273439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.700 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.700 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:22.700 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:22.959 [2024-12-11 13:56:15.915155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.959 [2024-12-11 13:56:15.931246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:22.959 nvme0n1 nvme0n2 00:15:22.959 nvme1n1 00:15:22.959 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:22.959 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:22.959 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:15:23.217 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:24.207 13:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0dc02832-8a0e-405a-b8dd-d52e036333fd 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0dc028328a0e405ab8ddd52e036333fd 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0DC028328A0E405AB8DDD52E036333FD 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0DC028328A0E405AB8DDD52E036333FD == \0\D\C\0\2\8\3\2\8\A\0\E\4\0\5\A\B\8\D\D\D\5\2\E\0\3\6\3\3\3\F\D ]] 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid afc5d817-bc9f-4803-8f30-b3d868371f9c 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:24.207 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=afc5d817bc9f48038f30b3d868371f9c 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AFC5D817BC9F48038F30B3D868371F9C 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AFC5D817BC9F48038F30B3D868371F9C == \A\F\C\5\D\8\1\7\B\C\9\F\4\8\0\3\8\F\3\0\B\3\D\8\6\8\3\7\1\F\9\C ]] 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:24.466 13:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f9eb9af5-384f-4503-a24d-38dbc44eed24 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f9eb9af5384f4503a24d38dbc44eed24 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F9EB9AF5384F4503A24D38DBC44EED24 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F9EB9AF5384F4503A24D38DBC44EED24 == \F\9\E\B\9\A\F\5\3\8\4\F\4\5\0\3\A\2\4\D\3\8\D\B\C\4\4\E\E\D\2\4 ]] 00:15:24.466 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 75053 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 75053 ']' 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 75053 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75053 00:15:24.725 killing process with pid 75053 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75053' 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 75053 00:15:24.725 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 75053 00:15:24.983 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:24.983 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.983 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.242 rmmod nvme_tcp 00:15:25.242 rmmod nvme_fabrics 00:15:25.242 rmmod nvme_keyring 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 75021 ']' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 75021 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 75021 ']' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 75021 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75021 00:15:25.242 killing process with pid 75021 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75021' 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 75021 00:15:25.242 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 75021 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:25.501 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:25.760 ************************************ 00:15:25.760 END TEST nvmf_nsid 00:15:25.760 ************************************ 00:15:25.760 00:15:25.760 real 0m5.503s 00:15:25.760 user 0m7.949s 00:15:25.760 sys 0m1.766s 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:25.760 ************************************ 00:15:25.760 END TEST nvmf_target_extra 00:15:25.760 ************************************ 00:15:25.760 00:15:25.760 real 5m12.688s 00:15:25.760 user 10m55.351s 00:15:25.760 sys 1m9.064s 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.760 13:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.760 13:56:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:25.760 13:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:25.760 13:56:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.760 13:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.760 ************************************ 00:15:25.760 START TEST nvmf_host 00:15:25.760 ************************************ 00:15:25.760 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:25.760 * Looking for test storage... 
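Before the host-side tests begin, note what the nsid checks above reduce to: the NGUID reported for each namespace must equal the UUID assigned to it with the dashes stripped. A rough standalone sketch of that verification, using the device names and UUIDs from the trace; the loop itself is illustrative, not the test's own code:

# Sketch of the check traced above: for each namespace, the NGUID read back
# via nvme id-ns must match the assigned UUID with dashes removed
# (compared here in upper case, as in the log).
declare -A expected=(
    [/dev/nvme0n1]=0dc02832-8a0e-405a-b8dd-d52e036333fd
    [/dev/nvme0n2]=afc5d817-bc9f-4803-8f30-b3d868371f9c
    [/dev/nvme0n3]=f9eb9af5-384f-4503-a24d-38dbc44eed24
)
for dev in "${!expected[@]}"; do
    want=$(tr -d - <<< "${expected[$dev]}")          # uuid2nguid step: drop dashes
    got=$(nvme id-ns "$dev" -o json | jq -r .nguid)  # NGUID as reported by the controller
    [[ "${got^^}" == "${want^^}" ]] || { echo "NGUID mismatch on $dev" >&2; exit 1; }
done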
00:15:26.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.019 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:26.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.020 --rc genhtml_branch_coverage=1 00:15:26.020 --rc genhtml_function_coverage=1 00:15:26.020 --rc genhtml_legend=1 00:15:26.020 --rc geninfo_all_blocks=1 00:15:26.020 --rc geninfo_unexecuted_blocks=1 00:15:26.020 00:15:26.020 ' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:26.020 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:26.020 --rc genhtml_branch_coverage=1 00:15:26.020 --rc genhtml_function_coverage=1 00:15:26.020 --rc genhtml_legend=1 00:15:26.020 --rc geninfo_all_blocks=1 00:15:26.020 --rc geninfo_unexecuted_blocks=1 00:15:26.020 00:15:26.020 ' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:26.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.020 --rc genhtml_branch_coverage=1 00:15:26.020 --rc genhtml_function_coverage=1 00:15:26.020 --rc genhtml_legend=1 00:15:26.020 --rc geninfo_all_blocks=1 00:15:26.020 --rc geninfo_unexecuted_blocks=1 00:15:26.020 00:15:26.020 ' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:26.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.020 --rc genhtml_branch_coverage=1 00:15:26.020 --rc genhtml_function_coverage=1 00:15:26.020 --rc genhtml_legend=1 00:15:26.020 --rc geninfo_all_blocks=1 00:15:26.020 --rc geninfo_unexecuted_blocks=1 00:15:26.020 00:15:26.020 ' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:26.020 
13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.020 13:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.021 ************************************ 00:15:26.021 START TEST nvmf_identify 00:15:26.021 ************************************ 00:15:26.021 13:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:26.280 * Looking for test storage... 00:15:26.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:26.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.281 --rc genhtml_branch_coverage=1 00:15:26.281 --rc genhtml_function_coverage=1 00:15:26.281 --rc genhtml_legend=1 00:15:26.281 --rc geninfo_all_blocks=1 00:15:26.281 --rc geninfo_unexecuted_blocks=1 00:15:26.281 00:15:26.281 ' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:26.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.281 --rc genhtml_branch_coverage=1 00:15:26.281 --rc genhtml_function_coverage=1 00:15:26.281 --rc genhtml_legend=1 00:15:26.281 --rc geninfo_all_blocks=1 00:15:26.281 --rc geninfo_unexecuted_blocks=1 00:15:26.281 00:15:26.281 ' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:26.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.281 --rc genhtml_branch_coverage=1 00:15:26.281 --rc genhtml_function_coverage=1 00:15:26.281 --rc genhtml_legend=1 00:15:26.281 --rc geninfo_all_blocks=1 00:15:26.281 --rc geninfo_unexecuted_blocks=1 00:15:26.281 00:15:26.281 ' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:26.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.281 --rc genhtml_branch_coverage=1 00:15:26.281 --rc genhtml_function_coverage=1 00:15:26.281 --rc genhtml_legend=1 00:15:26.281 --rc geninfo_all_blocks=1 00:15:26.281 --rc geninfo_unexecuted_blocks=1 00:15:26.281 00:15:26.281 ' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.281 
13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.281 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.281 13:56:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.281 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:26.282 Cannot find device "nvmf_init_br" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:26.282 Cannot find device "nvmf_init_br2" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:26.282 Cannot find device "nvmf_tgt_br" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:26.282 Cannot find device "nvmf_tgt_br2" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:26.282 Cannot find device "nvmf_init_br" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:26.282 Cannot find device "nvmf_init_br2" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:26.282 Cannot find device "nvmf_tgt_br" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:26.282 Cannot find device "nvmf_tgt_br2" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:26.282 Cannot find device "nvmf_br" 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:26.282 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:26.541 Cannot find device "nvmf_init_if" 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:26.541 Cannot find device "nvmf_init_if2" 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.541 
13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:26.541 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:26.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:26.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:26.801 00:15:26.801 --- 10.0.0.3 ping statistics --- 00:15:26.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.801 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:26.801 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:26.801 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.127 ms 00:15:26.801 00:15:26.801 --- 10.0.0.4 ping statistics --- 00:15:26.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.801 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:26.801 00:15:26.801 --- 10.0.0.1 ping statistics --- 00:15:26.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.801 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:26.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:26.801 00:15:26.801 --- 10.0.0.2 ping statistics --- 00:15:26.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.801 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=75419 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 75419 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 75419 ']' 00:15:26.801 
13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.801 13:56:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.801 [2024-12-11 13:56:19.737363] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:26.801 [2024-12-11 13:56:19.737678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.060 [2024-12-11 13:56:19.882279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.060 [2024-12-11 13:56:19.942372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.060 [2024-12-11 13:56:19.942611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.060 [2024-12-11 13:56:19.942687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.060 [2024-12-11 13:56:19.942871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.060 [2024-12-11 13:56:19.942999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
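The target was launched with -e 0xFFFF, so every tracepoint group is enabled, and the startup notices above spell out how to inspect them while the test is still running. A short sketch of the two options those notices mention (shared-memory name and id taken from the output above):

    # Live snapshot of the running nvmf target's tracepoints (app name 'nvmf', shm id 0):
    spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis/debug, as suggested above:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0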
00:15:27.060 [2024-12-11 13:56:19.944449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.060 [2024-12-11 13:56:19.944524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.060 [2024-12-11 13:56:19.944661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.060 [2024-12-11 13:56:19.944664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.060 [2024-12-11 13:56:20.002578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.060 [2024-12-11 13:56:20.079827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.060 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 Malloc0 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:27.319 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 [2024-12-11 13:56:20.210780] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 [ 00:15:27.320 { 00:15:27.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.320 "subtype": "Discovery", 00:15:27.320 "listen_addresses": [ 00:15:27.320 { 00:15:27.320 "trtype": "TCP", 00:15:27.320 "adrfam": "IPv4", 00:15:27.320 "traddr": "10.0.0.3", 00:15:27.320 "trsvcid": "4420" 00:15:27.320 } 00:15:27.320 ], 00:15:27.320 "allow_any_host": true, 00:15:27.320 "hosts": [] 00:15:27.320 }, 00:15:27.320 { 00:15:27.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.320 "subtype": "NVMe", 00:15:27.320 "listen_addresses": [ 00:15:27.320 { 00:15:27.320 "trtype": "TCP", 00:15:27.320 "adrfam": "IPv4", 00:15:27.320 "traddr": "10.0.0.3", 00:15:27.320 "trsvcid": "4420" 00:15:27.320 } 00:15:27.320 ], 00:15:27.320 "allow_any_host": true, 00:15:27.320 "hosts": [], 00:15:27.320 "serial_number": "SPDK00000000000001", 00:15:27.320 "model_number": "SPDK bdev Controller", 00:15:27.320 "max_namespaces": 32, 00:15:27.320 "min_cntlid": 1, 00:15:27.320 "max_cntlid": 65519, 00:15:27.320 "namespaces": [ 00:15:27.320 { 00:15:27.320 "nsid": 1, 00:15:27.320 "bdev_name": "Malloc0", 00:15:27.320 "name": "Malloc0", 00:15:27.320 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:27.320 "eui64": "ABCDEF0123456789", 00:15:27.320 "uuid": "726c0b82-07c6-4402-bba8-3e2d4b4f4f35" 00:15:27.320 } 00:15:27.320 ] 00:15:27.320 } 00:15:27.320 ] 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:27.320 [2024-12-11 13:56:20.264462] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
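The rpc_cmd calls above are what produce the nvmf_get_subsystems output just printed: a TCP transport created with the options shown, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying Malloc0 as namespace 1, and listeners for both that subsystem and the discovery service on 10.0.0.3:4420. Outside the test harness the same configuration could be driven with scripts/rpc.py, since the rpc_cmd helper issues the same JSON-RPC methods; a sketch, assuming the default RPC Unix socket /var/tmp/spdk.sock and the SPDK repo root as working directory:

    # Assumed setup: target already running (as started above); /var/tmp/spdk.sock is a
    # Unix socket, so it is reachable without entering the nvmf_tgt_ns_spdk namespace.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The spdk_nvme_identify run that starts right after this connects to that discovery listener using the transport ID string given on its command line (trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery).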
00:15:27.320 [2024-12-11 13:56:20.264640] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75441 ] 00:15:27.582 [2024-12-11 13:56:20.420772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:27.582 [2024-12-11 13:56:20.420885] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.582 [2024-12-11 13:56:20.420893] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.582 [2024-12-11 13:56:20.420909] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.582 [2024-12-11 13:56:20.420934] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.582 [2024-12-11 13:56:20.421317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:27.582 [2024-12-11 13:56:20.421385] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x56d750 0 00:15:27.582 [2024-12-11 13:56:20.426790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.582 [2024-12-11 13:56:20.426821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.582 [2024-12-11 13:56:20.426846] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.582 [2024-12-11 13:56:20.426850] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.582 [2024-12-11 13:56:20.426893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.582 [2024-12-11 13:56:20.426901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.582 [2024-12-11 13:56:20.426905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.582 [2024-12-11 13:56:20.426921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.582 [2024-12-11 13:56:20.426955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.582 [2024-12-11 13:56:20.433799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.582 [2024-12-11 13:56:20.433835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.582 [2024-12-11 13:56:20.433841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.582 [2024-12-11 13:56:20.433864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.582 [2024-12-11 13:56:20.433879] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.582 [2024-12-11 13:56:20.433888] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:27.582 [2024-12-11 13:56:20.433894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:27.582 [2024-12-11 13:56:20.433916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.582 [2024-12-11 13:56:20.433922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:27.582 [2024-12-11 13:56:20.433926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.582 [2024-12-11 13:56:20.433935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.582 [2024-12-11 13:56:20.433970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.582 [2024-12-11 13:56:20.434052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.582 [2024-12-11 13:56:20.434060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.582 [2024-12-11 13:56:20.434064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.582 [2024-12-11 13:56:20.434068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:27.583 [2024-12-11 13:56:20.434088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:27.583 [2024-12-11 13:56:20.434096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.434184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.434188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:27.583 [2024-12-11 13:56:20.434207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.434315] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.434319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.434430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.434434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434444] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:27.583 [2024-12-11 13:56:20.434449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434569] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:27.583 [2024-12-11 13:56:20.434575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.434685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.434690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:27.583 [2024-12-11 13:56:20.434695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.583 [2024-12-11 13:56:20.434727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.434820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.434824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.434834] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.583 [2024-12-11 13:56:20.434840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:27.583 [2024-12-11 13:56:20.434848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:27.583 [2024-12-11 13:56:20.434860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.583 [2024-12-11 13:56:20.434871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.434876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.434884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.583 [2024-12-11 13:56:20.434904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.434999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.583 [2024-12-11 13:56:20.435011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.583 [2024-12-11 13:56:20.435016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56d750): datao=0, datal=4096, cccid=0 00:15:27.583 [2024-12-11 13:56:20.435026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d1740) on tqpair(0x56d750): expected_datao=0, payload_size=4096 00:15:27.583 [2024-12-11 13:56:20.435031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:27.583 [2024-12-11 13:56:20.435040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435044] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.435060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.435064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.435078] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:27.583 [2024-12-11 13:56:20.435084] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:27.583 [2024-12-11 13:56:20.435088] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:27.583 [2024-12-11 13:56:20.435094] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:27.583 [2024-12-11 13:56:20.435122] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:27.583 [2024-12-11 13:56:20.435127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:27.583 [2024-12-11 13:56:20.435137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.583 [2024-12-11 13:56:20.435145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.435163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.583 [2024-12-11 13:56:20.435184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.583 [2024-12-11 13:56:20.435241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.583 [2024-12-11 13:56:20.435254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.583 [2024-12-11 13:56:20.435258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.583 [2024-12-11 13:56:20.435272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56d750) 00:15:27.583 [2024-12-11 13:56:20.435288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.583 [2024-12-11 13:56:20.435295] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.583 [2024-12-11 13:56:20.435303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.584 [2024-12-11 13:56:20.435315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.584 [2024-12-11 13:56:20.435336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.584 [2024-12-11 13:56:20.435355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.584 [2024-12-11 13:56:20.435370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.584 [2024-12-11 13:56:20.435379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.584 [2024-12-11 13:56:20.435413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1740, cid 0, qid 0 00:15:27.584 [2024-12-11 13:56:20.435420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d18c0, cid 1, qid 0 00:15:27.584 [2024-12-11 13:56:20.435425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1a40, cid 2, qid 0 00:15:27.584 [2024-12-11 13:56:20.435430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.584 [2024-12-11 13:56:20.435435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1d40, cid 4, qid 0 00:15:27.584 [2024-12-11 13:56:20.435520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.435527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.435531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1d40) on tqpair=0x56d750 00:15:27.584 [2024-12-11 13:56:20.435542] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:27.584 [2024-12-11 13:56:20.435547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:27.584 [2024-12-11 13:56:20.435559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.584 [2024-12-11 13:56:20.435591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1d40, cid 4, qid 0 00:15:27.584 [2024-12-11 13:56:20.435650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.584 [2024-12-11 13:56:20.435666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.584 [2024-12-11 13:56:20.435671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56d750): datao=0, datal=4096, cccid=4 00:15:27.584 [2024-12-11 13:56:20.435681] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d1d40) on tqpair(0x56d750): expected_datao=0, payload_size=4096 00:15:27.584 [2024-12-11 13:56:20.435686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435709] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.435727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.435731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1d40) on tqpair=0x56d750 00:15:27.584 [2024-12-11 13:56:20.435750] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:27.584 [2024-12-11 13:56:20.435781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.584 [2024-12-11 13:56:20.435804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.435819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.584 [2024-12-11 13:56:20.435846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x5d1d40, cid 4, qid 0 00:15:27.584 [2024-12-11 13:56:20.435854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1ec0, cid 5, qid 0 00:15:27.584 [2024-12-11 13:56:20.435957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.584 [2024-12-11 13:56:20.435964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.584 [2024-12-11 13:56:20.435968] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56d750): datao=0, datal=1024, cccid=4 00:15:27.584 [2024-12-11 13:56:20.435977] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d1d40) on tqpair(0x56d750): expected_datao=0, payload_size=1024 00:15:27.584 [2024-12-11 13:56:20.435982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435993] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.435999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.436006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.436009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1ec0) on tqpair=0x56d750 00:15:27.584 [2024-12-11 13:56:20.436032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.436040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.436044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1d40) on tqpair=0x56d750 00:15:27.584 [2024-12-11 13:56:20.436061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.436074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.584 [2024-12-11 13:56:20.436099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1d40, cid 4, qid 0 00:15:27.584 [2024-12-11 13:56:20.436163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.584 [2024-12-11 13:56:20.436170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.584 [2024-12-11 13:56:20.436174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436178] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56d750): datao=0, datal=3072, cccid=4 00:15:27.584 [2024-12-11 13:56:20.436183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d1d40) on tqpair(0x56d750): expected_datao=0, payload_size=3072 00:15:27.584 [2024-12-11 13:56:20.436188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436199] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.436214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.436218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1d40) on tqpair=0x56d750 00:15:27.584 [2024-12-11 13:56:20.436233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56d750) 00:15:27.584 [2024-12-11 13:56:20.436246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.584 [2024-12-11 13:56:20.436270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1d40, cid 4, qid 0 00:15:27.584 [2024-12-11 13:56:20.436330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.584 [2024-12-11 13:56:20.436337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.584 [2024-12-11 13:56:20.436341] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436345] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56d750): datao=0, datal=8, cccid=4 00:15:27.584 [2024-12-11 13:56:20.436350] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d1d40) on tqpair(0x56d750): expected_datao=0, payload_size=8 00:15:27.584 [2024-12-11 13:56:20.436355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436366] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.584 [2024-12-11 13:56:20.436389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.584 [2024-12-11 13:56:20.436393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.584 [2024-12-11 13:56:20.436397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1d40) on tqpair=0x56d750 00:15:27.584 ===================================================== 00:15:27.584 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:27.584 ===================================================== 00:15:27.584 Controller Capabilities/Features 00:15:27.584 ================================ 00:15:27.584 Vendor ID: 0000 00:15:27.584 Subsystem Vendor ID: 0000 00:15:27.584 Serial Number: .................... 00:15:27.584 Model Number: ........................................ 
00:15:27.584 Firmware Version: 25.01 00:15:27.584 Recommended Arb Burst: 0 00:15:27.584 IEEE OUI Identifier: 00 00 00 00:15:27.584 Multi-path I/O 00:15:27.585 May have multiple subsystem ports: No 00:15:27.585 May have multiple controllers: No 00:15:27.585 Associated with SR-IOV VF: No 00:15:27.585 Max Data Transfer Size: 131072 00:15:27.585 Max Number of Namespaces: 0 00:15:27.585 Max Number of I/O Queues: 1024 00:15:27.585 NVMe Specification Version (VS): 1.3 00:15:27.585 NVMe Specification Version (Identify): 1.3 00:15:27.585 Maximum Queue Entries: 128 00:15:27.585 Contiguous Queues Required: Yes 00:15:27.585 Arbitration Mechanisms Supported 00:15:27.585 Weighted Round Robin: Not Supported 00:15:27.585 Vendor Specific: Not Supported 00:15:27.585 Reset Timeout: 15000 ms 00:15:27.585 Doorbell Stride: 4 bytes 00:15:27.585 NVM Subsystem Reset: Not Supported 00:15:27.585 Command Sets Supported 00:15:27.585 NVM Command Set: Supported 00:15:27.585 Boot Partition: Not Supported 00:15:27.585 Memory Page Size Minimum: 4096 bytes 00:15:27.585 Memory Page Size Maximum: 4096 bytes 00:15:27.585 Persistent Memory Region: Not Supported 00:15:27.585 Optional Asynchronous Events Supported 00:15:27.585 Namespace Attribute Notices: Not Supported 00:15:27.585 Firmware Activation Notices: Not Supported 00:15:27.585 ANA Change Notices: Not Supported 00:15:27.585 PLE Aggregate Log Change Notices: Not Supported 00:15:27.585 LBA Status Info Alert Notices: Not Supported 00:15:27.585 EGE Aggregate Log Change Notices: Not Supported 00:15:27.585 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.585 Zone Descriptor Change Notices: Not Supported 00:15:27.585 Discovery Log Change Notices: Supported 00:15:27.585 Controller Attributes 00:15:27.585 128-bit Host Identifier: Not Supported 00:15:27.585 Non-Operational Permissive Mode: Not Supported 00:15:27.585 NVM Sets: Not Supported 00:15:27.585 Read Recovery Levels: Not Supported 00:15:27.585 Endurance Groups: Not Supported 00:15:27.585 Predictable Latency Mode: Not Supported 00:15:27.585 Traffic Based Keep ALive: Not Supported 00:15:27.585 Namespace Granularity: Not Supported 00:15:27.585 SQ Associations: Not Supported 00:15:27.585 UUID List: Not Supported 00:15:27.585 Multi-Domain Subsystem: Not Supported 00:15:27.585 Fixed Capacity Management: Not Supported 00:15:27.585 Variable Capacity Management: Not Supported 00:15:27.585 Delete Endurance Group: Not Supported 00:15:27.585 Delete NVM Set: Not Supported 00:15:27.585 Extended LBA Formats Supported: Not Supported 00:15:27.585 Flexible Data Placement Supported: Not Supported 00:15:27.585 00:15:27.585 Controller Memory Buffer Support 00:15:27.585 ================================ 00:15:27.585 Supported: No 00:15:27.585 00:15:27.585 Persistent Memory Region Support 00:15:27.585 ================================ 00:15:27.585 Supported: No 00:15:27.585 00:15:27.585 Admin Command Set Attributes 00:15:27.585 ============================ 00:15:27.585 Security Send/Receive: Not Supported 00:15:27.585 Format NVM: Not Supported 00:15:27.585 Firmware Activate/Download: Not Supported 00:15:27.585 Namespace Management: Not Supported 00:15:27.585 Device Self-Test: Not Supported 00:15:27.585 Directives: Not Supported 00:15:27.585 NVMe-MI: Not Supported 00:15:27.585 Virtualization Management: Not Supported 00:15:27.585 Doorbell Buffer Config: Not Supported 00:15:27.585 Get LBA Status Capability: Not Supported 00:15:27.585 Command & Feature Lockdown Capability: Not Supported 00:15:27.585 Abort Command Limit: 1 00:15:27.585 Async 
Event Request Limit: 4 00:15:27.585 Number of Firmware Slots: N/A 00:15:27.585 Firmware Slot 1 Read-Only: N/A 00:15:27.585 Firmware Activation Without Reset: N/A 00:15:27.585 Multiple Update Detection Support: N/A 00:15:27.585 Firmware Update Granularity: No Information Provided 00:15:27.585 Per-Namespace SMART Log: No 00:15:27.585 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.585 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:27.585 Command Effects Log Page: Not Supported 00:15:27.585 Get Log Page Extended Data: Supported 00:15:27.585 Telemetry Log Pages: Not Supported 00:15:27.585 Persistent Event Log Pages: Not Supported 00:15:27.585 Supported Log Pages Log Page: May Support 00:15:27.585 Commands Supported & Effects Log Page: Not Supported 00:15:27.585 Feature Identifiers & Effects Log Page:May Support 00:15:27.585 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.585 Data Area 4 for Telemetry Log: Not Supported 00:15:27.585 Error Log Page Entries Supported: 128 00:15:27.585 Keep Alive: Not Supported 00:15:27.585 00:15:27.585 NVM Command Set Attributes 00:15:27.585 ========================== 00:15:27.585 Submission Queue Entry Size 00:15:27.585 Max: 1 00:15:27.585 Min: 1 00:15:27.585 Completion Queue Entry Size 00:15:27.585 Max: 1 00:15:27.585 Min: 1 00:15:27.585 Number of Namespaces: 0 00:15:27.585 Compare Command: Not Supported 00:15:27.585 Write Uncorrectable Command: Not Supported 00:15:27.585 Dataset Management Command: Not Supported 00:15:27.585 Write Zeroes Command: Not Supported 00:15:27.585 Set Features Save Field: Not Supported 00:15:27.585 Reservations: Not Supported 00:15:27.585 Timestamp: Not Supported 00:15:27.585 Copy: Not Supported 00:15:27.585 Volatile Write Cache: Not Present 00:15:27.585 Atomic Write Unit (Normal): 1 00:15:27.585 Atomic Write Unit (PFail): 1 00:15:27.585 Atomic Compare & Write Unit: 1 00:15:27.585 Fused Compare & Write: Supported 00:15:27.585 Scatter-Gather List 00:15:27.585 SGL Command Set: Supported 00:15:27.585 SGL Keyed: Supported 00:15:27.585 SGL Bit Bucket Descriptor: Not Supported 00:15:27.585 SGL Metadata Pointer: Not Supported 00:15:27.585 Oversized SGL: Not Supported 00:15:27.585 SGL Metadata Address: Not Supported 00:15:27.585 SGL Offset: Supported 00:15:27.585 Transport SGL Data Block: Not Supported 00:15:27.585 Replay Protected Memory Block: Not Supported 00:15:27.585 00:15:27.585 Firmware Slot Information 00:15:27.585 ========================= 00:15:27.585 Active slot: 0 00:15:27.585 00:15:27.585 00:15:27.585 Error Log 00:15:27.585 ========= 00:15:27.585 00:15:27.585 Active Namespaces 00:15:27.585 ================= 00:15:27.585 Discovery Log Page 00:15:27.585 ================== 00:15:27.585 Generation Counter: 2 00:15:27.585 Number of Records: 2 00:15:27.585 Record Format: 0 00:15:27.585 00:15:27.585 Discovery Log Entry 0 00:15:27.585 ---------------------- 00:15:27.585 Transport Type: 3 (TCP) 00:15:27.585 Address Family: 1 (IPv4) 00:15:27.585 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:27.585 Entry Flags: 00:15:27.585 Duplicate Returned Information: 1 00:15:27.585 Explicit Persistent Connection Support for Discovery: 1 00:15:27.585 Transport Requirements: 00:15:27.585 Secure Channel: Not Required 00:15:27.585 Port ID: 0 (0x0000) 00:15:27.585 Controller ID: 65535 (0xffff) 00:15:27.585 Admin Max SQ Size: 128 00:15:27.585 Transport Service Identifier: 4420 00:15:27.585 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:27.585 Transport Address: 10.0.0.3 00:15:27.585 
Discovery Log Entry 1 00:15:27.585 ---------------------- 00:15:27.585 Transport Type: 3 (TCP) 00:15:27.585 Address Family: 1 (IPv4) 00:15:27.585 Subsystem Type: 2 (NVM Subsystem) 00:15:27.585 Entry Flags: 00:15:27.585 Duplicate Returned Information: 0 00:15:27.585 Explicit Persistent Connection Support for Discovery: 0 00:15:27.585 Transport Requirements: 00:15:27.585 Secure Channel: Not Required 00:15:27.585 Port ID: 0 (0x0000) 00:15:27.585 Controller ID: 65535 (0xffff) 00:15:27.585 Admin Max SQ Size: 128 00:15:27.585 Transport Service Identifier: 4420 00:15:27.585 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:27.585 Transport Address: 10.0.0.3 [2024-12-11 13:56:20.436531] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:27.585 [2024-12-11 13:56:20.436550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1740) on tqpair=0x56d750 00:15:27.585 [2024-12-11 13:56:20.436558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.585 [2024-12-11 13:56:20.436564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d18c0) on tqpair=0x56d750 00:15:27.585 [2024-12-11 13:56:20.440725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.585 [2024-12-11 13:56:20.440738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1a40) on tqpair=0x56d750 00:15:27.585 [2024-12-11 13:56:20.440743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.585 [2024-12-11 13:56:20.440748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.585 [2024-12-11 13:56:20.440753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.585 [2024-12-11 13:56:20.440773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.585 [2024-12-11 13:56:20.440779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.585 [2024-12-11 13:56:20.440783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.585 [2024-12-11 13:56:20.440793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.585 [2024-12-11 13:56:20.440824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.440883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.440892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.440896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.440901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.440910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.440914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.440918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.440926] 
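The discovery log dump above (generation counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both NVMe/TCP, IPv4, 10.0.0.3:4420) is what spdk_nvme_identify -L all prints once the controller-initialization and GET LOG PAGE exchanges traced before it have completed; the DEBUG lines that follow are the host tearing the discovery controller back down. As a hypothetical cross-check, not part of this test run, the same two entries should be reported by a kernel NVMe host via nvme-cli (the nvme-tcp module was already loaded by modprobe earlier in the trace):

    # Hypothetical: query the same discovery service with nvme-cli from the host side.
    nvme discover -t tcp -a 10.0.0.3 -s 4420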
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.440952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441041] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:27.586 [2024-12-11 13:56:20.441046] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:27.586 [2024-12-11 13:56:20.441058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441290] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.441930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.441937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.441940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.441955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.441964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.441972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.441990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.442036] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.442044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.442049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.442064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.442081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.586 [2024-12-11 13:56:20.442099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.586 [2024-12-11 13:56:20.442145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.586 [2024-12-11 13:56:20.442152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.586 [2024-12-11 13:56:20.442155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.586 [2024-12-11 13:56:20.442171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.586 [2024-12-11 13:56:20.442180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.586 [2024-12-11 13:56:20.442187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442374] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 
[2024-12-11 13:56:20.442729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.442922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.442929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.442933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.442948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.442957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.442965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.442982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.443031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.443049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.443054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.443071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 
13:56:20.443080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.443088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.443117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.443168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.443175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.443179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.443194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.443211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.443230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.443279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.443286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.587 [2024-12-11 13:56:20.443290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.587 [2024-12-11 13:56:20.443305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.587 [2024-12-11 13:56:20.443314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.587 [2024-12-11 13:56:20.443321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.587 [2024-12-11 13:56:20.443339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.587 [2024-12-11 13:56:20.443384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.587 [2024-12-11 13:56:20.443391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.443443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.443493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.443499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.443562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.443611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.443618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.443670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.443728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.443737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.443794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 
13:56:20.443841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.443848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.443901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.443946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.443954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.443958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.443974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.443982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.443990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 
13:56:20.444173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 
00:15:27.588 [2024-12-11 13:56:20.444497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.588 [2024-12-11 13:56:20.444619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.588 [2024-12-11 13:56:20.444637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.588 [2024-12-11 13:56:20.444680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.588 [2024-12-11 13:56:20.444687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.588 [2024-12-11 13:56:20.444691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.588 [2024-12-11 13:56:20.444695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.588 [2024-12-11 13:56:20.444717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.444735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.444755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.444798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.444810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.444815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.444831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:27.589 [2024-12-11 13:56:20.444840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.444848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.444866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.444912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.444919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.444923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.444938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.444947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.444955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.444972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445164] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.445583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.445588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.445604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.445613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.445621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.445640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.445689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.449727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.449762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.449767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.449801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.449807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.449811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56d750) 00:15:27.589 [2024-12-11 13:56:20.449820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.589 [2024-12-11 13:56:20.449847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d1bc0, cid 3, qid 0 00:15:27.589 [2024-12-11 13:56:20.449902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.589 [2024-12-11 13:56:20.449909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.589 [2024-12-11 13:56:20.449913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.589 [2024-12-11 13:56:20.449917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d1bc0) on tqpair=0x56d750 00:15:27.589 [2024-12-11 13:56:20.449926] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 8 milliseconds 00:15:27.589 00:15:27.589 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:27.589 [2024-12-11 13:56:20.494537] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:15:27.589 [2024-12-11 13:56:20.494578] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75449 ] 00:15:27.851 [2024-12-11 13:56:20.658851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:27.851 [2024-12-11 13:56:20.658929] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.851 [2024-12-11 13:56:20.658937] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.851 [2024-12-11 13:56:20.658951] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.851 [2024-12-11 13:56:20.658963] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.851 [2024-12-11 13:56:20.659348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:27.851 [2024-12-11 13:56:20.659417] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10bf750 0 00:15:27.851 [2024-12-11 13:56:20.666721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.851 [2024-12-11 13:56:20.666751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.851 [2024-12-11 13:56:20.666759] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.851 [2024-12-11 13:56:20.666762] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.851 [2024-12-11 13:56:20.666801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.851 [2024-12-11 13:56:20.666809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.851 [2024-12-11 13:56:20.666814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.851 [2024-12-11 13:56:20.666830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.851 [2024-12-11 13:56:20.666862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.851 [2024-12-11 13:56:20.674727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.851 [2024-12-11 13:56:20.674756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.851 [2024-12-11 13:56:20.674762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.851 [2024-12-11 13:56:20.674768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.851 [2024-12-11 13:56:20.674782] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.851 [2024-12-11 13:56:20.674792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:27.851 [2024-12-11 13:56:20.674800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:27.851 [2024-12-11 13:56:20.674828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.674835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.674839] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.674852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.674886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.674959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.674967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.674971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.674975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.674992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:27.852 [2024-12-11 13:56:20.675001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:27.852 [2024-12-11 13:56:20.675010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.675027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.675047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.675320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.675334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.675338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.675350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:27.852 [2024-12-11 13:56:20.675361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.675369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.675386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.675407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.675466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.675473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 
[2024-12-11 13:56:20.675477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.675487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.675498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.675515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.675534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.675886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.675904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.675908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.675913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.675919] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:27.852 [2024-12-11 13:56:20.675925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.675934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.676047] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:27.852 [2024-12-11 13:56:20.676053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.676064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.676080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.676104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.676378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.676394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.676399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 
00:15:27.852 [2024-12-11 13:56:20.676409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.852 [2024-12-11 13:56:20.676421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.676438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.676458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.676572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.676580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.676583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.676593] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.852 [2024-12-11 13:56:20.676598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:27.852 [2024-12-11 13:56:20.676607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:27.852 [2024-12-11 13:56:20.676619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.852 [2024-12-11 13:56:20.676633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.676638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.676647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.852 [2024-12-11 13:56:20.676667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.677039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.852 [2024-12-11 13:56:20.677057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.852 [2024-12-11 13:56:20.677062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=4096, cccid=0 00:15:27.852 [2024-12-11 13:56:20.677072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123740) on tqpair(0x10bf750): expected_datao=0, payload_size=4096 00:15:27.852 [2024-12-11 13:56:20.677077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677087] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677092] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.677108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.677112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.852 [2024-12-11 13:56:20.677127] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:27.852 [2024-12-11 13:56:20.677133] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:27.852 [2024-12-11 13:56:20.677137] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:27.852 [2024-12-11 13:56:20.677142] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:27.852 [2024-12-11 13:56:20.677147] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:27.852 [2024-12-11 13:56:20.677153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:27.852 [2024-12-11 13:56:20.677163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.852 [2024-12-11 13:56:20.677171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.852 [2024-12-11 13:56:20.677189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.852 [2024-12-11 13:56:20.677212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.852 [2024-12-11 13:56:20.677587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.852 [2024-12-11 13:56:20.677603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.852 [2024-12-11 13:56:20.677608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.852 [2024-12-11 13:56:20.677612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.677621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.677638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.853 [2024-12-11 13:56:20.677646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 
13:56:20.677654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.677660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.853 [2024-12-11 13:56:20.677667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.677682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.853 [2024-12-11 13:56:20.677688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.677736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.853 [2024-12-11 13:56:20.677743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.677761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.677770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.677775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.677782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.853 [2024-12-11 13:56:20.677810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123740, cid 0, qid 0 00:15:27.853 [2024-12-11 13:56:20.677818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11238c0, cid 1, qid 0 00:15:27.853 [2024-12-11 13:56:20.677824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123a40, cid 2, qid 0 00:15:27.853 [2024-12-11 13:56:20.677829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.853 [2024-12-11 13:56:20.677834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.853 [2024-12-11 13:56:20.678300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.853 [2024-12-11 13:56:20.678317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.853 [2024-12-11 13:56:20.678322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.678327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.678334] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:27.853 [2024-12-11 13:56:20.678340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.678355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.678363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.678371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.678376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.678380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.678388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.853 [2024-12-11 13:56:20.678410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.853 [2024-12-11 13:56:20.678565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.853 [2024-12-11 13:56:20.678572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.853 [2024-12-11 13:56:20.678576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.678580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.678646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.678659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.678670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.678674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.678682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.853 [2024-12-11 13:56:20.682712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.853 [2024-12-11 13:56:20.682746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.853 [2024-12-11 13:56:20.682755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.853 [2024-12-11 13:56:20.682759] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.682763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=4096, cccid=4 00:15:27.853 [2024-12-11 13:56:20.682769] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123d40) on tqpair(0x10bf750): expected_datao=0, payload_size=4096 00:15:27.853 [2024-12-11 13:56:20.682774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.682783] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.682787] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 
13:56:20.682794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.853 [2024-12-11 13:56:20.682800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.853 [2024-12-11 13:56:20.682804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.682808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.682838] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:27.853 [2024-12-11 13:56:20.682853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.682868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.682878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.682883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.682893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.853 [2024-12-11 13:56:20.682920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.853 [2024-12-11 13:56:20.683258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.853 [2024-12-11 13:56:20.683278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.853 [2024-12-11 13:56:20.683283] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683288] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=4096, cccid=4 00:15:27.853 [2024-12-11 13:56:20.683293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123d40) on tqpair(0x10bf750): expected_datao=0, payload_size=4096 00:15:27.853 [2024-12-11 13:56:20.683298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683310] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.853 [2024-12-11 13:56:20.683384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.853 [2024-12-11 13:56:20.683388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.683413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.683426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.683437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.853 [2024-12-11 13:56:20.683451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.853 [2024-12-11 13:56:20.683482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.853 [2024-12-11 13:56:20.683896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.853 [2024-12-11 13:56:20.683906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.853 [2024-12-11 13:56:20.683910] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=4096, cccid=4 00:15:27.853 [2024-12-11 13:56:20.683920] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123d40) on tqpair(0x10bf750): expected_datao=0, payload_size=4096 00:15:27.853 [2024-12-11 13:56:20.683925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683936] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.853 [2024-12-11 13:56:20.683952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.853 [2024-12-11 13:56:20.683955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.853 [2024-12-11 13:56:20.683960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.853 [2024-12-11 13:56:20.683970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.683980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:27.853 [2024-12-11 13:56:20.683997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:27.854 [2024-12-11 13:56:20.684009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:27.854 [2024-12-11 13:56:20.684015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:27.854 [2024-12-11 13:56:20.684021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:27.854 [2024-12-11 13:56:20.684027] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:27.854 [2024-12-11 13:56:20.684032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:27.854 [2024-12-11 13:56:20.684039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:27.854 [2024-12-11 13:56:20.684061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 
[2024-12-11 13:56:20.684066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.684075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.684083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.684097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.854 [2024-12-11 13:56:20.684127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.854 [2024-12-11 13:56:20.684135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123ec0, cid 5, qid 0 00:15:27.854 [2024-12-11 13:56:20.684531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.684548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.684553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.684565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.684571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.684575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123ec0) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.684591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.684604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.684624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123ec0, cid 5, qid 0 00:15:27.854 [2024-12-11 13:56:20.684681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.684688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.684691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123ec0) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.684720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.684726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.684733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.684752] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123ec0, cid 5, qid 0 00:15:27.854 [2024-12-11 13:56:20.685173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.685189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.685193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123ec0) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.685210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.685223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.685242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123ec0, cid 5, qid 0 00:15:27.854 [2024-12-11 13:56:20.685293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.685300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.685304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123ec0) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.685330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.685344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.685352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.685364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.685372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.685383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.685392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10bf750) 00:15:27.854 [2024-12-11 13:56:20.685403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.854 [2024-12-11 13:56:20.685424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123ec0, cid 5, qid 0 00:15:27.854 
[2024-12-11 13:56:20.685431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123d40, cid 4, qid 0 00:15:27.854 [2024-12-11 13:56:20.685436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1124040, cid 6, qid 0 00:15:27.854 [2024-12-11 13:56:20.685441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11241c0, cid 7, qid 0 00:15:27.854 [2024-12-11 13:56:20.685839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.854 [2024-12-11 13:56:20.685855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.854 [2024-12-11 13:56:20.685860] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=8192, cccid=5 00:15:27.854 [2024-12-11 13:56:20.685869] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123ec0) on tqpair(0x10bf750): expected_datao=0, payload_size=8192 00:15:27.854 [2024-12-11 13:56:20.685874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685898] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.854 [2024-12-11 13:56:20.685910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.854 [2024-12-11 13:56:20.685914] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685918] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=512, cccid=4 00:15:27.854 [2024-12-11 13:56:20.685922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1123d40) on tqpair(0x10bf750): expected_datao=0, payload_size=512 00:15:27.854 [2024-12-11 13:56:20.685927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685934] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685938] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.854 [2024-12-11 13:56:20.685949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.854 [2024-12-11 13:56:20.685953] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685956] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=512, cccid=6 00:15:27.854 [2024-12-11 13:56:20.685961] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1124040) on tqpair(0x10bf750): expected_datao=0, payload_size=512 00:15:27.854 [2024-12-11 13:56:20.685965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685972] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685976] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.854 [2024-12-11 13:56:20.685987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.854 [2024-12-11 13:56:20.685991] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.685994] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bf750): datao=0, datal=4096, cccid=7 00:15:27.854 [2024-12-11 13:56:20.686008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11241c0) on tqpair(0x10bf750): expected_datao=0, payload_size=4096 00:15:27.854 [2024-12-11 13:56:20.686013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.686020] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.686024] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.686030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.686035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.686039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.686053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123ec0) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.686070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.686078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.686081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.854 [2024-12-11 13:56:20.686085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123d40) on tqpair=0x10bf750 00:15:27.854 [2024-12-11 13:56:20.686099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.854 [2024-12-11 13:56:20.686105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.854 [2024-12-11 13:56:20.686109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.855 [2024-12-11 13:56:20.686113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1124040) on tqpair=0x10bf750 00:15:27.855 [2024-12-11 13:56:20.686121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.855 [2024-12-11 13:56:20.686127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.855 [2024-12-11 13:56:20.686131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.855 [2024-12-11 13:56:20.686135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11241c0) on tqpair=0x10bf750 00:15:27.855 ===================================================== 00:15:27.855 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.855 ===================================================== 00:15:27.855 Controller Capabilities/Features 00:15:27.855 ================================ 00:15:27.855 Vendor ID: 8086 00:15:27.855 Subsystem Vendor ID: 8086 00:15:27.855 Serial Number: SPDK00000000000001 00:15:27.855 Model Number: SPDK bdev Controller 00:15:27.855 Firmware Version: 25.01 00:15:27.855 Recommended Arb Burst: 6 00:15:27.855 IEEE OUI Identifier: e4 d2 5c 00:15:27.855 Multi-path I/O 00:15:27.855 May have multiple subsystem ports: Yes 00:15:27.855 May have multiple controllers: Yes 00:15:27.855 Associated with SR-IOV VF: No 00:15:27.855 Max Data Transfer Size: 131072 00:15:27.855 Max Number of Namespaces: 32 00:15:27.855 Max Number of I/O Queues: 127 00:15:27.855 NVMe Specification Version (VS): 1.3 00:15:27.855 NVMe Specification Version (Identify): 1.3 
00:15:27.855 Maximum Queue Entries: 128 00:15:27.855 Contiguous Queues Required: Yes 00:15:27.855 Arbitration Mechanisms Supported 00:15:27.855 Weighted Round Robin: Not Supported 00:15:27.855 Vendor Specific: Not Supported 00:15:27.855 Reset Timeout: 15000 ms 00:15:27.855 Doorbell Stride: 4 bytes 00:15:27.855 NVM Subsystem Reset: Not Supported 00:15:27.855 Command Sets Supported 00:15:27.855 NVM Command Set: Supported 00:15:27.855 Boot Partition: Not Supported 00:15:27.855 Memory Page Size Minimum: 4096 bytes 00:15:27.855 Memory Page Size Maximum: 4096 bytes 00:15:27.855 Persistent Memory Region: Not Supported 00:15:27.855 Optional Asynchronous Events Supported 00:15:27.855 Namespace Attribute Notices: Supported 00:15:27.855 Firmware Activation Notices: Not Supported 00:15:27.855 ANA Change Notices: Not Supported 00:15:27.855 PLE Aggregate Log Change Notices: Not Supported 00:15:27.855 LBA Status Info Alert Notices: Not Supported 00:15:27.855 EGE Aggregate Log Change Notices: Not Supported 00:15:27.855 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.855 Zone Descriptor Change Notices: Not Supported 00:15:27.855 Discovery Log Change Notices: Not Supported 00:15:27.855 Controller Attributes 00:15:27.855 128-bit Host Identifier: Supported 00:15:27.855 Non-Operational Permissive Mode: Not Supported 00:15:27.855 NVM Sets: Not Supported 00:15:27.855 Read Recovery Levels: Not Supported 00:15:27.855 Endurance Groups: Not Supported 00:15:27.855 Predictable Latency Mode: Not Supported 00:15:27.855 Traffic Based Keep ALive: Not Supported 00:15:27.855 Namespace Granularity: Not Supported 00:15:27.855 SQ Associations: Not Supported 00:15:27.855 UUID List: Not Supported 00:15:27.855 Multi-Domain Subsystem: Not Supported 00:15:27.855 Fixed Capacity Management: Not Supported 00:15:27.855 Variable Capacity Management: Not Supported 00:15:27.855 Delete Endurance Group: Not Supported 00:15:27.855 Delete NVM Set: Not Supported 00:15:27.855 Extended LBA Formats Supported: Not Supported 00:15:27.855 Flexible Data Placement Supported: Not Supported 00:15:27.855 00:15:27.855 Controller Memory Buffer Support 00:15:27.855 ================================ 00:15:27.855 Supported: No 00:15:27.855 00:15:27.855 Persistent Memory Region Support 00:15:27.855 ================================ 00:15:27.855 Supported: No 00:15:27.855 00:15:27.855 Admin Command Set Attributes 00:15:27.855 ============================ 00:15:27.855 Security Send/Receive: Not Supported 00:15:27.855 Format NVM: Not Supported 00:15:27.855 Firmware Activate/Download: Not Supported 00:15:27.855 Namespace Management: Not Supported 00:15:27.855 Device Self-Test: Not Supported 00:15:27.855 Directives: Not Supported 00:15:27.855 NVMe-MI: Not Supported 00:15:27.855 Virtualization Management: Not Supported 00:15:27.855 Doorbell Buffer Config: Not Supported 00:15:27.855 Get LBA Status Capability: Not Supported 00:15:27.855 Command & Feature Lockdown Capability: Not Supported 00:15:27.855 Abort Command Limit: 4 00:15:27.855 Async Event Request Limit: 4 00:15:27.855 Number of Firmware Slots: N/A 00:15:27.855 Firmware Slot 1 Read-Only: N/A 00:15:27.855 Firmware Activation Without Reset: N/A 00:15:27.855 Multiple Update Detection Support: N/A 00:15:27.855 Firmware Update Granularity: No Information Provided 00:15:27.855 Per-Namespace SMART Log: No 00:15:27.855 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.855 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:27.855 Command Effects Log Page: Supported 00:15:27.855 Get Log Page Extended 
Data: Supported 00:15:27.855 Telemetry Log Pages: Not Supported 00:15:27.855 Persistent Event Log Pages: Not Supported 00:15:27.855 Supported Log Pages Log Page: May Support 00:15:27.855 Commands Supported & Effects Log Page: Not Supported 00:15:27.855 Feature Identifiers & Effects Log Page:May Support 00:15:27.855 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.855 Data Area 4 for Telemetry Log: Not Supported 00:15:27.855 Error Log Page Entries Supported: 128 00:15:27.855 Keep Alive: Supported 00:15:27.855 Keep Alive Granularity: 10000 ms 00:15:27.855 00:15:27.855 NVM Command Set Attributes 00:15:27.855 ========================== 00:15:27.855 Submission Queue Entry Size 00:15:27.855 Max: 64 00:15:27.855 Min: 64 00:15:27.855 Completion Queue Entry Size 00:15:27.855 Max: 16 00:15:27.855 Min: 16 00:15:27.855 Number of Namespaces: 32 00:15:27.855 Compare Command: Supported 00:15:27.855 Write Uncorrectable Command: Not Supported 00:15:27.855 Dataset Management Command: Supported 00:15:27.855 Write Zeroes Command: Supported 00:15:27.855 Set Features Save Field: Not Supported 00:15:27.855 Reservations: Supported 00:15:27.855 Timestamp: Not Supported 00:15:27.855 Copy: Supported 00:15:27.855 Volatile Write Cache: Present 00:15:27.855 Atomic Write Unit (Normal): 1 00:15:27.855 Atomic Write Unit (PFail): 1 00:15:27.855 Atomic Compare & Write Unit: 1 00:15:27.855 Fused Compare & Write: Supported 00:15:27.855 Scatter-Gather List 00:15:27.855 SGL Command Set: Supported 00:15:27.855 SGL Keyed: Supported 00:15:27.855 SGL Bit Bucket Descriptor: Not Supported 00:15:27.855 SGL Metadata Pointer: Not Supported 00:15:27.855 Oversized SGL: Not Supported 00:15:27.855 SGL Metadata Address: Not Supported 00:15:27.855 SGL Offset: Supported 00:15:27.855 Transport SGL Data Block: Not Supported 00:15:27.855 Replay Protected Memory Block: Not Supported 00:15:27.855 00:15:27.855 Firmware Slot Information 00:15:27.855 ========================= 00:15:27.855 Active slot: 1 00:15:27.855 Slot 1 Firmware Revision: 25.01 00:15:27.855 00:15:27.855 00:15:27.855 Commands Supported and Effects 00:15:27.855 ============================== 00:15:27.855 Admin Commands 00:15:27.855 -------------- 00:15:27.855 Get Log Page (02h): Supported 00:15:27.855 Identify (06h): Supported 00:15:27.855 Abort (08h): Supported 00:15:27.855 Set Features (09h): Supported 00:15:27.855 Get Features (0Ah): Supported 00:15:27.855 Asynchronous Event Request (0Ch): Supported 00:15:27.855 Keep Alive (18h): Supported 00:15:27.855 I/O Commands 00:15:27.855 ------------ 00:15:27.855 Flush (00h): Supported LBA-Change 00:15:27.855 Write (01h): Supported LBA-Change 00:15:27.855 Read (02h): Supported 00:15:27.855 Compare (05h): Supported 00:15:27.855 Write Zeroes (08h): Supported LBA-Change 00:15:27.855 Dataset Management (09h): Supported LBA-Change 00:15:27.855 Copy (19h): Supported LBA-Change 00:15:27.855 00:15:27.855 Error Log 00:15:27.855 ========= 00:15:27.855 00:15:27.855 Arbitration 00:15:27.855 =========== 00:15:27.855 Arbitration Burst: 1 00:15:27.855 00:15:27.855 Power Management 00:15:27.855 ================ 00:15:27.855 Number of Power States: 1 00:15:27.855 Current Power State: Power State #0 00:15:27.855 Power State #0: 00:15:27.855 Max Power: 0.00 W 00:15:27.855 Non-Operational State: Operational 00:15:27.855 Entry Latency: Not Reported 00:15:27.855 Exit Latency: Not Reported 00:15:27.855 Relative Read Throughput: 0 00:15:27.855 Relative Read Latency: 0 00:15:27.855 Relative Write Throughput: 0 00:15:27.855 Relative Write Latency: 0 
00:15:27.855 Idle Power: Not Reported 00:15:27.855 Active Power: Not Reported 00:15:27.855 Non-Operational Permissive Mode: Not Supported 00:15:27.855 00:15:27.855 Health Information 00:15:27.855 ================== 00:15:27.855 Critical Warnings: 00:15:27.855 Available Spare Space: OK 00:15:27.855 Temperature: OK 00:15:27.855 Device Reliability: OK 00:15:27.855 Read Only: No 00:15:27.855 Volatile Memory Backup: OK 00:15:27.855 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:27.855 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:27.855 Available Spare: 0% 00:15:27.856 Available Spare Threshold: 0% 00:15:27.856 Life Percentage Used:[2024-12-11 13:56:20.686252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.686260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.686269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.686295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11241c0, cid 7, qid 0 00:15:27.856 [2024-12-11 13:56:20.686407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.686414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.686418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.686422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11241c0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.686469] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:27.856 [2024-12-11 13:56:20.686482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123740) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.686490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.856 [2024-12-11 13:56:20.686496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11238c0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.686501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.856 [2024-12-11 13:56:20.686507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123a40) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.686512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.856 [2024-12-11 13:56:20.686517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.686522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.856 [2024-12-11 13:56:20.686533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.686538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.686542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.686550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.686574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.690724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.690748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.690753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.690770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.690788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.690819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.690896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.690903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.690907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.690917] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:27.856 [2024-12-11 13:56:20.690922] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:27.856 [2024-12-11 13:56:20.690933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.690942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.690950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.690969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.691352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.691368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.691373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.691390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.691407] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.691427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.691478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.691485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.691489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.691504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.691521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.691538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.691642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.691656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.691661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.691677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.691686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.691694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.691725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.692031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.692046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.692051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.692067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.692084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.692103] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.692373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.856 [2024-12-11 13:56:20.692388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.856 [2024-12-11 13:56:20.692392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.856 [2024-12-11 13:56:20.692408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.856 [2024-12-11 13:56:20.692417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.856 [2024-12-11 13:56:20.692425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.856 [2024-12-11 13:56:20.692449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.856 [2024-12-11 13:56:20.692723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.692739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.692744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.692748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.692760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.692765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.692769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.692777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.692798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.693028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.693042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.693047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.693063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.693080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.693099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.693356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 
13:56:20.693367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.693372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.693388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.693405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.693423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.693683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.693707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.693713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.693730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.693740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.693747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.693772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.694036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.694047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.694052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.694056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.694068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.694073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.694077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.694084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.694103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.694356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.694368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.694372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 
[2024-12-11 13:56:20.694376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.694387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.694392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.694396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.694404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.694422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.694680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.694691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.694696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.698724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.698749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.698755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.698759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bf750) 00:15:27.857 [2024-12-11 13:56:20.698769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.857 [2024-12-11 13:56:20.698807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1123bc0, cid 3, qid 0 00:15:27.857 [2024-12-11 13:56:20.698869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.857 [2024-12-11 13:56:20.698876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.857 [2024-12-11 13:56:20.698880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.857 [2024-12-11 13:56:20.698884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1123bc0) on tqpair=0x10bf750 00:15:27.857 [2024-12-11 13:56:20.698893] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:15:27.857 0% 00:15:27.857 Data Units Read: 0 00:15:27.857 Data Units Written: 0 00:15:27.857 Host Read Commands: 0 00:15:27.857 Host Write Commands: 0 00:15:27.857 Controller Busy Time: 0 minutes 00:15:27.857 Power Cycles: 0 00:15:27.857 Power On Hours: 0 hours 00:15:27.857 Unsafe Shutdowns: 0 00:15:27.857 Unrecoverable Media Errors: 0 00:15:27.857 Lifetime Error Log Entries: 0 00:15:27.857 Warning Temperature Time: 0 minutes 00:15:27.857 Critical Temperature Time: 0 minutes 00:15:27.857 00:15:27.857 Number of Queues 00:15:27.857 ================ 00:15:27.857 Number of I/O Submission Queues: 127 00:15:27.857 Number of I/O Completion Queues: 127 00:15:27.857 00:15:27.857 Active Namespaces 00:15:27.857 ================= 00:15:27.857 Namespace ID:1 00:15:27.857 Error Recovery Timeout: Unlimited 00:15:27.857 Command Set Identifier: NVM (00h) 00:15:27.857 Deallocate: Supported 00:15:27.857 Deallocated/Unwritten Error: Not Supported 00:15:27.857 Deallocated Read Value: Unknown 00:15:27.857 Deallocate in Write Zeroes: 
Not Supported 00:15:27.857 Deallocated Guard Field: 0xFFFF 00:15:27.857 Flush: Supported 00:15:27.857 Reservation: Supported 00:15:27.857 Namespace Sharing Capabilities: Multiple Controllers 00:15:27.857 Size (in LBAs): 131072 (0GiB) 00:15:27.857 Capacity (in LBAs): 131072 (0GiB) 00:15:27.857 Utilization (in LBAs): 131072 (0GiB) 00:15:27.857 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:27.857 EUI64: ABCDEF0123456789 00:15:27.857 UUID: 726c0b82-07c6-4402-bba8-3e2d4b4f4f35 00:15:27.857 Thin Provisioning: Not Supported 00:15:27.857 Per-NS Atomic Units: Yes 00:15:27.857 Atomic Boundary Size (Normal): 0 00:15:27.857 Atomic Boundary Size (PFail): 0 00:15:27.857 Atomic Boundary Offset: 0 00:15:27.857 Maximum Single Source Range Length: 65535 00:15:27.857 Maximum Copy Length: 65535 00:15:27.857 Maximum Source Range Count: 1 00:15:27.857 NGUID/EUI64 Never Reused: No 00:15:27.857 Namespace Write Protected: No 00:15:27.857 Number of LBA Formats: 1 00:15:27.857 Current LBA Format: LBA Format #00 00:15:27.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:27.857 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.857 rmmod nvme_tcp 00:15:27.857 rmmod nvme_fabrics 00:15:27.857 rmmod nvme_keyring 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 75419 ']' 00:15:27.857 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 75419 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 75419 ']' 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 75419 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75419 00:15:27.858 killing 
process with pid 75419 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75419' 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 75419 00:15:27.858 13:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 75419 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:28.116 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@300 -- # return 0 00:15:28.375 00:15:28.375 real 0m2.411s 00:15:28.375 user 0m4.859s 00:15:28.375 sys 0m0.780s 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.375 ************************************ 00:15:28.375 END TEST nvmf_identify 00:15:28.375 ************************************ 00:15:28.375 13:56:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.633 ************************************ 00:15:28.633 START TEST nvmf_perf 00:15:28.633 ************************************ 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:28.633 * Looking for test storage... 00:15:28.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.633 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.893 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.893 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:28.894 Cannot find device "nvmf_init_br" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:28.894 Cannot find device "nvmf_init_br2" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:28.894 Cannot find device "nvmf_tgt_br" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.894 Cannot find device "nvmf_tgt_br2" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:28.894 Cannot find device "nvmf_init_br" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:28.894 Cannot find device "nvmf_init_br2" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:28.894 Cannot find device "nvmf_tgt_br" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:28.894 Cannot find device "nvmf_tgt_br2" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:28.894 Cannot find device "nvmf_br" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:28.894 Cannot find device "nvmf_init_if" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:28.894 Cannot find device "nvmf_init_if2" 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.894 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.190 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.190 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.190 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.190 13:56:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:29.190 13:56:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.190 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:29.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:29.191 00:15:29.191 --- 10.0.0.3 ping statistics --- 00:15:29.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.191 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:29.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:29.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:29.191 00:15:29.191 --- 10.0.0.4 ping statistics --- 00:15:29.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.191 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:29.191 00:15:29.191 --- 10.0.0.1 ping statistics --- 00:15:29.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.191 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:29.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:29.191 00:15:29.191 --- 10.0.0.2 ping statistics --- 00:15:29.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.191 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=75670 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 75670 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 75670 ']' 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
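The nvmf_veth_init trace above amounts to a small, reproducible loopback topology for the TCP transport. A minimal sketch of the same setup, trimmed to a single initiator/target veth pair and using the interface names and addresses from the trace (run as root; the full helper lives in test/nvmf/common.sh):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers so both ends can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port (the helper also tags the rule with an SPDK_NVMF comment)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # connectivity checks, as in the trace
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1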
00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.191 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.492 [2024-12-11 13:56:22.221049] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:29.492 [2024-12-11 13:56:22.221314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.492 [2024-12-11 13:56:22.363450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.492 [2024-12-11 13:56:22.426923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.492 [2024-12-11 13:56:22.426983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.492 [2024-12-11 13:56:22.426995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.492 [2024-12-11 13:56:22.427004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.492 [2024-12-11 13:56:22.427011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.492 [2024-12-11 13:56:22.428287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.492 [2024-12-11 13:56:22.428461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.492 [2024-12-11 13:56:22.428594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.492 [2024-12-11 13:56:22.428594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.492 [2024-12-11 13:56:22.483303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:29.751 13:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:30.009 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:30.009 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:30.577 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:30.577 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.835 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:30.835 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:30.835 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:30.835 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:30.835 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.094 [2024-12-11 13:56:23.918563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.094 13:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.353 13:56:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.353 13:56:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.611 13:56:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.611 13:56:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:31.870 13:56:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.129 [2024-12-11 13:56:25.032133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.129 13:56:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:32.387 13:56:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:32.387 13:56:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:32.387 13:56:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:32.387 13:56:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:33.763 Initializing NVMe Controllers 00:15:33.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:33.763 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:33.763 Initialization complete. Launching workers. 00:15:33.763 ======================================================== 00:15:33.763 Latency(us) 00:15:33.763 Device Information : IOPS MiB/s Average min max 00:15:33.763 PCIE (0000:00:10.0) NSID 1 from core 0: 22592.00 88.25 1415.93 365.74 8095.49 00:15:33.763 ======================================================== 00:15:33.763 Total : 22592.00 88.25 1415.93 365.74 8095.49 00:15:33.763 00:15:33.763 13:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:34.699 Initializing NVMe Controllers 00:15:34.699 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.699 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:34.699 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:34.699 Initialization complete. Launching workers. 
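The target-side configuration that perf.sh drives over /var/tmp/spdk.sock reduces to a short rpc.py sequence; a sketch assembled from the RPC calls visible in the trace (NQN, bdev names and listen address exactly as logged, repo path as used in this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py bdev_malloc_create 64 512                                   # creates Malloc0
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2, local NVMe at 0000:00:10.0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420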
00:15:34.699 ======================================================== 00:15:34.699 Latency(us) 00:15:34.699 Device Information : IOPS MiB/s Average min max 00:15:34.699 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3474.99 13.57 286.29 104.98 6313.44 00:15:34.699 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8144.68 6455.46 15048.42 00:15:34.699 ======================================================== 00:15:34.699 Total : 3598.99 14.06 557.04 104.98 15048.42 00:15:34.699 00:15:34.971 13:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:36.346 Initializing NVMe Controllers 00:15:36.346 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.346 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:36.346 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:36.346 Initialization complete. Launching workers. 00:15:36.346 ======================================================== 00:15:36.346 Latency(us) 00:15:36.346 Device Information : IOPS MiB/s Average min max 00:15:36.346 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8646.70 33.78 3700.92 568.48 8638.12 00:15:36.346 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4006.40 15.65 8000.57 6120.30 9483.35 00:15:36.346 ======================================================== 00:15:36.346 Total : 12653.10 49.43 5062.33 568.48 9483.35 00:15:36.346 00:15:36.346 13:56:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:36.346 13:56:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:38.875 Initializing NVMe Controllers 00:15:38.875 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.875 Controller IO queue size 128, less than required. 00:15:38.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.875 Controller IO queue size 128, less than required. 00:15:38.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.875 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:38.875 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:38.875 Initialization complete. Launching workers. 
00:15:38.875 ======================================================== 00:15:38.875 Latency(us) 00:15:38.875 Device Information : IOPS MiB/s Average min max 00:15:38.875 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1694.30 423.58 77206.48 40080.38 120818.34 00:15:38.875 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 629.00 157.25 204670.31 58289.42 327541.69 00:15:38.875 ======================================================== 00:15:38.875 Total : 2323.30 580.83 111715.38 40080.38 327541.69 00:15:38.875 00:15:38.875 13:56:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:39.134 Initializing NVMe Controllers 00:15:39.134 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.134 Controller IO queue size 128, less than required. 00:15:39.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.134 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:39.134 Controller IO queue size 128, less than required. 00:15:39.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.134 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:39.134 WARNING: Some requested NVMe devices were skipped 00:15:39.134 No valid NVMe controllers or AIO or URING devices found 00:15:39.134 13:56:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:41.669 Initializing NVMe Controllers 00:15:41.669 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.669 Controller IO queue size 128, less than required. 00:15:41.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.669 Controller IO queue size 128, less than required. 00:15:41.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.669 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.669 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:41.669 Initialization complete. Launching workers. 
00:15:41.669 00:15:41.669 ==================== 00:15:41.669 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:41.669 TCP transport: 00:15:41.669 polls: 9030 00:15:41.669 idle_polls: 4624 00:15:41.669 sock_completions: 4406 00:15:41.669 nvme_completions: 6447 00:15:41.669 submitted_requests: 9630 00:15:41.669 queued_requests: 1 00:15:41.669 00:15:41.669 ==================== 00:15:41.669 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:41.669 TCP transport: 00:15:41.669 polls: 9270 00:15:41.669 idle_polls: 5224 00:15:41.669 sock_completions: 4046 00:15:41.669 nvme_completions: 6533 00:15:41.669 submitted_requests: 9740 00:15:41.669 queued_requests: 1 00:15:41.669 ======================================================== 00:15:41.669 Latency(us) 00:15:41.669 Device Information : IOPS MiB/s Average min max 00:15:41.669 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1608.28 402.07 81027.44 48516.88 129528.25 00:15:41.669 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1629.74 407.43 79402.20 38636.79 117226.33 00:15:41.669 ======================================================== 00:15:41.669 Total : 3238.02 809.50 80209.43 38636.79 129528.25 00:15:41.669 00:15:41.669 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:41.669 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.928 13:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.928 rmmod nvme_tcp 00:15:42.187 rmmod nvme_fabrics 00:15:42.187 rmmod nvme_keyring 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 75670 ']' 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 75670 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 75670 ']' 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 75670 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75670 00:15:42.187 killing process with pid 75670 00:15:42.187 13:56:35 
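Each benchmark pass above is a separate spdk_nvme_perf invocation against the same subsystem, varying only queue depth (-q), I/O size (-o), runtime (-t) and extra flags such as -HI or -P. The general pattern, using the fabrics connection string from this run (the -q/-o values below are just one of the passes shown):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        --transport-stat    # optional: dumps the per-queue TCP poll/completion counters seen above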
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75670' 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 75670 00:15:42.187 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 75670 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:42.754 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.013 13:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.013 13:56:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:43.013 ************************************ 00:15:43.013 END TEST nvmf_perf 00:15:43.013 ************************************ 
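nvmftestfini then mirrors the setup in reverse. A condensed sketch of the cleanup traced above (interface, module and pid values as logged; the final namespace removal is assumed to be what _remove_spdk_ns performs):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 75670                                            # the nvmf_tgt pid recorded at startup
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                      # assumed equivalent of _remove_spdk_ns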
00:15:43.013 00:15:43.013 real 0m14.566s 00:15:43.013 user 0m52.293s 00:15:43.013 sys 0m4.226s 00:15:43.013 13:56:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.013 13:56:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.273 ************************************ 00:15:43.273 START TEST nvmf_fio_host 00:15:43.273 ************************************ 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.273 * Looking for test storage... 00:15:43.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.273 --rc genhtml_branch_coverage=1 00:15:43.273 --rc genhtml_function_coverage=1 00:15:43.273 --rc genhtml_legend=1 00:15:43.273 --rc geninfo_all_blocks=1 00:15:43.273 --rc geninfo_unexecuted_blocks=1 00:15:43.273 00:15:43.273 ' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.273 --rc genhtml_branch_coverage=1 00:15:43.273 --rc genhtml_function_coverage=1 00:15:43.273 --rc genhtml_legend=1 00:15:43.273 --rc geninfo_all_blocks=1 00:15:43.273 --rc geninfo_unexecuted_blocks=1 00:15:43.273 00:15:43.273 ' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.273 --rc genhtml_branch_coverage=1 00:15:43.273 --rc genhtml_function_coverage=1 00:15:43.273 --rc genhtml_legend=1 00:15:43.273 --rc geninfo_all_blocks=1 00:15:43.273 --rc geninfo_unexecuted_blocks=1 00:15:43.273 00:15:43.273 ' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.273 --rc genhtml_branch_coverage=1 00:15:43.273 --rc genhtml_function_coverage=1 00:15:43.273 --rc genhtml_legend=1 00:15:43.273 --rc geninfo_all_blocks=1 00:15:43.273 --rc geninfo_unexecuted_blocks=1 00:15:43.273 00:15:43.273 ' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.273 13:56:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.273 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.274 13:56:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.274 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
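The entries just above capture a harmless script error that recurs throughout this log: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']' because the variable being tested is empty, and test(1) cannot compare an empty string as an integer, hence "[: : integer expression expected". A minimal sketch of the failure and of a defensive form that treats empty/unset as 0 (the variable name below is a hypothetical stand-in, not the one common.sh actually tests):

unset SOME_FLAG                               # hypothetical stand-in for the empty variable at common.sh line 33
[ "$SOME_FLAG" -eq 1 ] && echo enabled        # -> [: : integer expression expected, as in the log
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled   # empty/unset is treated as 0, no error
(( ${SOME_FLAG:-0} == 1 )) && echo enabled    # arithmetic form of the same guard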
00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:43.274 Cannot find device "nvmf_init_br" 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:43.274 Cannot find device "nvmf_init_br2" 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:43.274 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:43.533 Cannot find device "nvmf_tgt_br" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:43.533 Cannot find device "nvmf_tgt_br2" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:43.533 Cannot find device "nvmf_init_br" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:43.533 Cannot find device "nvmf_init_br2" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:43.533 Cannot find device "nvmf_tgt_br" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:43.533 Cannot find device "nvmf_tgt_br2" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:43.533 Cannot find device "nvmf_br" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:43.533 Cannot find device "nvmf_init_if" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:43.533 Cannot find device "nvmf_init_if2" 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.533 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.792 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.792 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:43.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:43.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:15:43.793 00:15:43.793 --- 10.0.0.3 ping statistics --- 00:15:43.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.793 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:43.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:43.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:15:43.793 00:15:43.793 --- 10.0.0.4 ping statistics --- 00:15:43.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.793 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:43.793 00:15:43.793 --- 10.0.0.1 ping statistics --- 00:15:43.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.793 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:43.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:43.793 00:15:43.793 --- 10.0.0.2 ping statistics --- 00:15:43.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.793 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
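The nvmf_veth_init entries above (common.sh@177-@225) build the virtual network used for the rest of the run: a namespace nvmf_tgt_ns_spdk for the target, two initiator veth pairs and two target veth pairs joined by the bridge nvmf_br, iptables ACCEPT rules for the NVMe/TCP port 4420, and one ping per address as a sanity check. A condensed sketch of the same topology, distilled from those entries (not a verbatim copy of nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing as in the log: initiators 10.0.0.1/.2, target 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up and tie the *_br ends together with a bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# open the NVMe/TCP port on the initiator interfaces and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, as recorded above
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2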
00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=76126 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 76126 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 76126 ']' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.793 13:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.793 [2024-12-11 13:56:36.779513] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:43.793 [2024-12-11 13:56:36.779915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.052 [2024-12-11 13:56:36.936569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.052 [2024-12-11 13:56:37.011006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.052 [2024-12-11 13:56:37.011276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.052 [2024-12-11 13:56:37.011495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.052 [2024-12-11 13:56:37.011516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.052 [2024-12-11 13:56:37.011527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
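The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness polling the freshly started nvmf_tgt for a responsive RPC socket before issuing any configuration RPCs. A minimal sketch of that launch-and-wait pattern, using the same flags as in the log (this shows the general idea only, not the exact waitforlisten implementation from autotest_common.sh; the 30-second budget is an assumption):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# start the target in the test namespace, same flags as recorded above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the default RPC socket until the app answers (give up after roughly 30s)
for _ in $(seq 1 300); do
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# once the socket answers, the run proceeds to the configuration RPCs recorded below:
# nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, ...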
00:15:44.052 [2024-12-11 13:56:37.013060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.052 [2024-12-11 13:56:37.013390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.052 [2024-12-11 13:56:37.013397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.052 [2024-12-11 13:56:37.013236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.052 [2024-12-11 13:56:37.073936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.988 13:56:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.988 13:56:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:44.988 13:56:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.988 [2024-12-11 13:56:37.994577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.988 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:44.988 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.988 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.506 Malloc1 00:15:45.506 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:45.764 13:56:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.023 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:46.282 [2024-12-11 13:56:39.266951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:46.282 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:46.541 13:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:46.800 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:46.800 fio-3.35 00:15:46.800 Starting 1 thread 00:15:49.401 00:15:49.401 test: (groupid=0, jobs=1): err= 0: pid=76209: Wed Dec 11 13:56:42 2024 00:15:49.401 read: IOPS=8753, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2007msec) 00:15:49.401 slat (usec): min=2, max=281, avg= 2.61, stdev= 2.88 00:15:49.401 clat (usec): min=1723, max=13751, avg=7609.25, stdev=513.83 00:15:49.401 lat (usec): min=1753, max=13753, avg=7611.85, stdev=513.57 00:15:49.402 clat percentiles (usec): 00:15:49.402 | 1.00th=[ 6587], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7242], 00:15:49.402 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7701], 00:15:49.402 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356], 00:15:49.402 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11863], 99.95th=[12780], 00:15:49.402 | 99.99th=[13698] 00:15:49.402 bw ( KiB/s): min=34272, max=35640, per=100.00%, avg=35022.00, stdev=586.84, samples=4 00:15:49.402 iops : min= 8568, max= 8910, avg=8755.50, stdev=146.71, samples=4 00:15:49.402 write: IOPS=8760, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2007msec); 0 zone resets 00:15:49.402 slat (usec): min=2, max=190, avg= 2.67, stdev= 1.72 00:15:49.402 clat (usec): min=1611, max=13526, avg=6940.19, stdev=476.63 00:15:49.402 lat (usec): min=1620, max=13529, avg=6942.86, stdev=476.54 00:15:49.402 clat percentiles 
(usec): 00:15:49.402 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:49.402 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:15:49.402 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7570], 00:15:49.402 | 99.00th=[ 7898], 99.50th=[ 8029], 99.90th=[10945], 99.95th=[12780], 00:15:49.402 | 99.99th=[13435] 00:15:49.402 bw ( KiB/s): min=34816, max=35128, per=99.96%, avg=35026.00, stdev=141.82, samples=4 00:15:49.402 iops : min= 8704, max= 8782, avg=8756.50, stdev=35.45, samples=4 00:15:49.402 lat (msec) : 2=0.04%, 4=0.12%, 10=99.64%, 20=0.20% 00:15:49.402 cpu : usr=69.44%, sys=23.38%, ctx=19, majf=0, minf=6 00:15:49.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:49.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.402 issued rwts: total=17569,17582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.402 00:15:49.402 Run status group 0 (all jobs): 00:15:49.402 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2007-2007msec 00:15:49.402 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2007-2007msec 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.402 13:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:49.402 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:49.402 fio-3.35 00:15:49.402 Starting 1 thread 00:15:51.934 00:15:51.934 test: (groupid=0, jobs=1): err= 0: pid=76252: Wed Dec 11 13:56:44 2024 00:15:51.934 read: IOPS=8129, BW=127MiB/s (133MB/s)(255MiB/2007msec) 00:15:51.934 slat (usec): min=3, max=129, avg= 3.84, stdev= 2.24 00:15:51.934 clat (usec): min=1946, max=17978, avg=8795.86, stdev=2838.07 00:15:51.934 lat (usec): min=1949, max=17981, avg=8799.70, stdev=2838.13 00:15:51.934 clat percentiles (usec): 00:15:51.934 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6259], 00:15:51.934 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:51.934 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12649], 95.00th=[14222], 00:15:51.934 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:15:51.934 | 99.99th=[17957] 00:15:51.934 bw ( KiB/s): min=59232, max=71872, per=50.37%, avg=65512.00, stdev=6757.91, samples=4 00:15:51.934 iops : min= 3702, max= 4492, avg=4094.50, stdev=422.37, samples=4 00:15:51.934 write: IOPS=4688, BW=73.3MiB/s (76.8MB/s)(134MiB/1829msec); 0 zone resets 00:15:51.934 slat (usec): min=33, max=378, avg=39.34, stdev= 8.77 00:15:51.934 clat (usec): min=3053, max=19706, avg=12401.30, stdev=2309.79 00:15:51.934 lat (usec): min=3089, max=19756, avg=12440.64, stdev=2310.30 00:15:51.934 clat percentiles (usec): 00:15:51.934 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10421], 00:15:51.934 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:15:51.934 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15533], 95.00th=[16712], 00:15:51.934 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:15:51.934 | 99.99th=[19792] 00:15:51.934 bw ( KiB/s): min=61056, max=74752, per=90.91%, avg=68192.00, stdev=7052.30, samples=4 00:15:51.934 iops : min= 3816, max= 4672, avg=4262.00, stdev=440.77, samples=4 00:15:51.934 lat (msec) : 2=0.01%, 4=0.56%, 10=48.75%, 20=50.68% 00:15:51.934 cpu : usr=80.91%, sys=14.81%, ctx=7, majf=0, minf=11 00:15:51.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:51.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.934 issued rwts: total=16316,8575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.934 00:15:51.934 Run status group 0 (all jobs): 00:15:51.934 READ: bw=127MiB/s (133MB/s), 
127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (267MB), run=2007-2007msec 00:15:51.934 WRITE: bw=73.3MiB/s (76.8MB/s), 73.3MiB/s-73.3MiB/s (76.8MB/s-76.8MB/s), io=134MiB (140MB), run=1829-1829msec 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.934 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.934 rmmod nvme_tcp 00:15:51.934 rmmod nvme_fabrics 00:15:51.934 rmmod nvme_keyring 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 76126 ']' 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 76126 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 76126 ']' 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 76126 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.193 13:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76126 00:15:52.193 killing process with pid 76126 00:15:52.193 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.193 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.193 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76126' 00:15:52.193 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 76126 00:15:52.193 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 76126 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.451 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.708 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:52.709 ************************************ 00:15:52.709 END TEST nvmf_fio_host 00:15:52.709 ************************************ 00:15:52.709 00:15:52.709 real 0m9.481s 00:15:52.709 user 0m37.500s 00:15:52.709 sys 0m2.538s 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.709 ************************************ 00:15:52.709 START TEST nvmf_failover 00:15:52.709 
************************************ 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:52.709 * Looking for test storage... 00:15:52.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:15:52.709 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.967 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:52.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.968 --rc genhtml_branch_coverage=1 00:15:52.968 --rc genhtml_function_coverage=1 00:15:52.968 --rc genhtml_legend=1 00:15:52.968 --rc geninfo_all_blocks=1 00:15:52.968 --rc geninfo_unexecuted_blocks=1 00:15:52.968 00:15:52.968 ' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:52.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.968 --rc genhtml_branch_coverage=1 00:15:52.968 --rc genhtml_function_coverage=1 00:15:52.968 --rc genhtml_legend=1 00:15:52.968 --rc geninfo_all_blocks=1 00:15:52.968 --rc geninfo_unexecuted_blocks=1 00:15:52.968 00:15:52.968 ' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:52.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.968 --rc genhtml_branch_coverage=1 00:15:52.968 --rc genhtml_function_coverage=1 00:15:52.968 --rc genhtml_legend=1 00:15:52.968 --rc geninfo_all_blocks=1 00:15:52.968 --rc geninfo_unexecuted_blocks=1 00:15:52.968 00:15:52.968 ' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:52.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.968 --rc genhtml_branch_coverage=1 00:15:52.968 --rc genhtml_function_coverage=1 00:15:52.968 --rc genhtml_legend=1 00:15:52.968 --rc geninfo_all_blocks=1 00:15:52.968 --rc geninfo_unexecuted_blocks=1 00:15:52.968 00:15:52.968 ' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.968 
13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
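For orientation, the NVMF_APP array assembled in the common.sh lines above is what later becomes the target command line. A minimal sketch of that assembly, assuming the binary path and namespace wrapper seen elsewhere in this log (the real logic lives in test/nvmf/common.sh):

  # Base target command plus shared-memory id and error-trace mask (common.sh@29 above)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  # Once the test namespace exists, the wrapper is prepended (common.sh@227 further down),
  # yielding the "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE" invocation below.
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")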
00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.968 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.969 Cannot find device "nvmf_init_br" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.969 Cannot find device "nvmf_init_br2" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:52.969 Cannot find device "nvmf_tgt_br" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.969 Cannot find device "nvmf_tgt_br2" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.969 Cannot find device "nvmf_init_br" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.969 Cannot find device "nvmf_init_br2" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.969 Cannot find device "nvmf_tgt_br" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.969 Cannot find device "nvmf_tgt_br2" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.969 Cannot find device "nvmf_br" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.969 Cannot find device "nvmf_init_if" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.969 Cannot find device "nvmf_init_if2" 00:15:52.969 13:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:52.969 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.969 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:52.969 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.969 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:52.969 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.227 
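The run of "Cannot find device" and "Cannot open network namespace" messages above is expected on a clean runner: nvmf_veth_init tears down leftovers from any previous run before creating its own interfaces, and each teardown command carries a true fallback (the "# true" entries) so a missing device does not abort the script. A minimal sketch of that idiom, not the literal common.sh source:

  # Teardown-before-setup: tolerate interfaces that do not exist yet.
  ip link set nvmf_init_br nomaster || true              # prints "Cannot find device" on a fresh node
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns add nvmf_tgt_ns_spdk                          # then the fresh setup begins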
13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:53.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:53.227 00:15:53.227 --- 10.0.0.3 ping statistics --- 00:15:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.227 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:53.227 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:53.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:53.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:15:53.227 00:15:53.227 --- 10.0.0.4 ping statistics --- 00:15:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.227 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:53.486 00:15:53.486 --- 10.0.0.1 ping statistics --- 00:15:53.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.486 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:53.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:53.486 00:15:53.486 --- 10.0.0.2 ping statistics --- 00:15:53.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.486 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=76521 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 76521 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76521 ']' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # 
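The paired ipts/iptables entries above come from a small wrapper that tags every firewall rule so it can be found and removed during cleanup. A plausible sketch, inferred from the expanded commands in this log rather than copied from nvmf/common.sh:

  ipts() {
      # Apply the rule and append a marker comment containing the original arguments,
      # matching the "SPDK_NVMF:<args>" expansion visible at nvmf/common.sh@790 above.
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT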
local rpc_addr=/var/tmp/spdk.sock 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:53.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.486 13:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:53.486 [2024-12-11 13:56:46.370569] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:53.486 [2024-12-11 13:56:46.370682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.486 [2024-12-11 13:56:46.524926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.744 [2024-12-11 13:56:46.614373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.744 [2024-12-11 13:56:46.614739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.744 [2024-12-11 13:56:46.614911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.744 [2024-12-11 13:56:46.615215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.744 [2024-12-11 13:56:46.615259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
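At this point the script has built a self-contained veth topology hanging off one bridge, with the target-side interfaces isolated in their own network namespace. Condensed from the ip commands above, using the exact names and addresses from this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # second initiator, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # second target, 10.0.0.4/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" master nvmf_br; done
  ping -c 1 10.0.0.3                                            # connectivity checks, as run above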
00:15:53.744 [2024-12-11 13:56:46.616825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.744 [2024-12-11 13:56:46.616982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.744 [2024-12-11 13:56:46.616996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.744 [2024-12-11 13:56:46.695262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.677 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.936 [2024-12-11 13:56:47.766448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.936 13:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:55.193 Malloc0 00:15:55.193 13:56:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.452 13:56:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.709 13:56:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:55.967 [2024-12-11 13:56:48.834961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.968 13:56:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:56.225 [2024-12-11 13:56:49.139367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:56.225 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:56.483 [2024-12-11 13:56:49.395804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76580 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76580 /var/tmp/bdevperf.sock 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76580 ']' 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.483 13:56:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:57.417 13:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.417 13:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:57.417 13:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:57.981 NVMe0n1 00:15:57.981 13:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:58.238 00:15:58.238 13:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76609 00:15:58.238 13:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.238 13:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:59.186 13:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:59.443 13:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:02.723 13:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:02.981 00:16:02.981 13:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:03.239 [2024-12-11 13:56:56.156628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x982950 is same with the state(6) to be set 00:16:03.239 [2024-12-11 13:56:56.156753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x982950 is same with the state(6) to be set 00:16:03.239 [2024-12-11 13:56:56.156766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x982950 is same with the state(6) to be set 00:16:03.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:06.525 13:56:59 
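The failover exercise driven by host/failover.sh above reduces to: provision one malloc-backed subsystem with three TCP listeners, attach bdevperf to two of them with failover enabled, then shuffle the listeners while the 15-second verify workload runs. A condensed recap of the RPCs from this log (script paths shortened to rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done
  # bdevperf (-q 128 -o 4096 -w verify -t 15 -f) attaches the same controller on 4420 and 4421
  # with -x failover, then the listeners are cycled underneath it:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # force failover to 4421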
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:06.525 [2024-12-11 13:56:59.462946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:06.525 13:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:07.461 13:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:08.027 13:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 76609 00:16:13.295 { 00:16:13.295 "results": [ 00:16:13.295 { 00:16:13.295 "job": "NVMe0n1", 00:16:13.295 "core_mask": "0x1", 00:16:13.295 "workload": "verify", 00:16:13.295 "status": "finished", 00:16:13.295 "verify_range": { 00:16:13.295 "start": 0, 00:16:13.295 "length": 16384 00:16:13.295 }, 00:16:13.295 "queue_depth": 128, 00:16:13.295 "io_size": 4096, 00:16:13.295 "runtime": 15.008412, 00:16:13.295 "iops": 8047.4869693076125, 00:16:13.295 "mibps": 31.43549597385786, 00:16:13.295 "io_failed": 2989, 00:16:13.295 "io_timeout": 0, 00:16:13.295 "avg_latency_us": 15489.038791472973, 00:16:13.295 "min_latency_us": 621.8472727272728, 00:16:13.295 "max_latency_us": 17515.985454545455 00:16:13.295 } 00:16:13.295 ], 00:16:13.295 "core_count": 1 00:16:13.295 } 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 76580 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76580 ']' 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76580 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76580 00:16:13.295 killing process with pid 76580 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76580' 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76580 00:16:13.295 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76580 00:16:13.567 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:13.567 [2024-12-11 13:56:49.465925] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
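The bdevperf summary above is internally consistent: the reported MiB/s is just iops times the 4096-byte I/O size, and the verify run still finished despite the 2989 failed I/Os recorded across the listener moves. A quick check of the reported figures:

  # 8047.49 IOPS * 4096 bytes / 1048576 ≈ 31.44 MiB/s, matching the "mibps" field above
  echo '8047.4869693076125 * 4096 / 1048576' | bc -l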
00:16:13.567 [2024-12-11 13:56:49.466026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76580 ] 00:16:13.567 [2024-12-11 13:56:49.615542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.567 [2024-12-11 13:56:49.681868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.567 [2024-12-11 13:56:49.741038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:13.567 Running I/O for 15 seconds... 00:16:13.567 6800.00 IOPS, 26.56 MiB/s [2024-12-11T13:57:06.614Z] [2024-12-11 13:56:52.437806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.437887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.437932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.437949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.437965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.437979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.437994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:13.567 [2024-12-11 13:56:52.438135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438489] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.567 [2024-12-11 13:56:52.438669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.567 [2024-12-11 13:56:52.438697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.567 [2024-12-11 13:56:52.438712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.438975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.438992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:13.568 [2024-12-11 13:56:52.439794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.568 [2024-12-11 13:56:52.439839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.439981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.439995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.440010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.568 [2024-12-11 13:56:52.440023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.568 [2024-12-11 13:56:52.440038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440408] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.569 [2024-12-11 13:56:52.440932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.440976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.440990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 
[2024-12-11 13:56:52.441042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.569 [2024-12-11 13:56:52.441247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.569 [2024-12-11 13:56:52.441262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:52.441625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441640] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b10ac0 is same with the state(6) to be set 00:16:13.570 [2024-12-11 13:56:52.441657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70648 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71168 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71176 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71184 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71192 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71200 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.441962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.441972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.441982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71208 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.441994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.442018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.442027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71216 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.442040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.570 [2024-12-11 13:56:52.442064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.570 [2024-12-11 13:56:52.442073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71224 len:8 PRP1 0x0 PRP2 0x0 00:16:13.570 [2024-12-11 13:56:52.442087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442150] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:13.570 [2024-12-11 13:56:52.442227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:52.442250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:52.442279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:52.442319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:52.442346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:52.442360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:16:13.570 [2024-12-11 13:56:52.446128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:13.570 [2024-12-11 13:56:52.446168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa1c60 (9): Bad file descriptor 00:16:13.570 [2024-12-11 13:56:52.469700] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:16:13.570 7293.50 IOPS, 28.49 MiB/s [2024-12-11T13:57:06.617Z] 7485.33 IOPS, 29.24 MiB/s [2024-12-11T13:57:06.617Z] 7582.75 IOPS, 29.62 MiB/s [2024-12-11T13:57:06.617Z] [2024-12-11 13:56:56.154885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:56.154953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:56.154991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:56.155006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:56.155020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:56.155033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:56.155048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.570 [2024-12-11 13:56:56.155062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:56.155091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1c60 is same with the state(6) to be set 00:16:13.570 [2024-12-11 13:56:56.157264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:56.157295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.570 [2024-12-11 13:56:56.157320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.570 [2024-12-11 13:56:56.157342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:13.571 [2024-12-11 13:56:56.157751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.157984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.157999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 
13:56:56.158057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.571 [2024-12-11 13:56:56.158156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.571 [2024-12-11 13:56:56.158365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.571 [2024-12-11 13:56:56.158380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.158394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.158904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.158934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.158963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.158978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 
[2024-12-11 13:56:56.159334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.572 [2024-12-11 13:56:56.159451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.572 [2024-12-11 13:56:56.159690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.572 [2024-12-11 13:56:56.159706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.159952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.159982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.159997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.573 [2024-12-11 13:56:56.160443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 
[2024-12-11 13:56:56.160615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.573 [2024-12-11 13:56:56.160661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14c90 is same with the state(6) to be set 00:16:13.573 [2024-12-11 13:56:56.160693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48000 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.160759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48392 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48400 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.160858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48408 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.160911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:48416 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.160959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.160973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.573 [2024-12-11 13:56:56.160984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.573 [2024-12-11 13:56:56.160994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48424 len:8 PRP1 0x0 PRP2 0x0 00:16:13.573 [2024-12-11 13:56:56.161008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.573 [2024-12-11 13:56:56.161022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48432 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48440 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48448 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48456 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48464 len:8 PRP1 0x0 PRP2 0x0 
00:16:13.574 [2024-12-11 13:56:56.161270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48472 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48480 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48488 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48496 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48504 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48512 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48520 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48528 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48536 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.574 [2024-12-11 13:56:56.161778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.574 [2024-12-11 13:56:56.161788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48544 len:8 PRP1 0x0 PRP2 0x0 00:16:13.574 [2024-12-11 13:56:56.161801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:56:56.161867] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:13.574 [2024-12-11 13:56:56.161887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:13.574 [2024-12-11 13:56:56.165727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:13.574 [2024-12-11 13:56:56.165767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa1c60 (9): Bad file descriptor 00:16:13.574 [2024-12-11 13:56:56.191868] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:16:13.574 7578.60 IOPS, 29.60 MiB/s [2024-12-11T13:57:06.621Z] 7624.33 IOPS, 29.78 MiB/s [2024-12-11T13:57:06.621Z] 7658.29 IOPS, 29.92 MiB/s [2024-12-11T13:57:06.621Z] 7656.25 IOPS, 29.91 MiB/s [2024-12-11T13:57:06.621Z] 7642.89 IOPS, 29.86 MiB/s [2024-12-11T13:57:06.621Z] [2024-12-11 13:57:00.775413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.574 [2024-12-11 13:57:00.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.775547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.574 [2024-12-11 13:57:00.775562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.775576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.574 [2024-12-11 13:57:00.775588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.775602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.574 [2024-12-11 13:57:00.775615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.775628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1c60 is same with the state(6) to be set 00:16:13.574 [2024-12-11 13:57:00.776390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.574 [2024-12-11 13:57:00.776429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.574 [2024-12-11 13:57:00.776473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.574 [2024-12-11 13:57:00.776520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.574 [2024-12-11 13:57:00.776549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.574 [2024-12-11 13:57:00.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.574 [2024-12-11 13:57:00.776626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.574 [2024-12-11 13:57:00.776689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.574 [2024-12-11 13:57:00.776706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.574 [2024-12-11 13:57:00.776737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.776785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.776827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.776857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.776888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.776918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.776948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.776963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.776978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 
[2024-12-11 13:57:00.776993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.575 [2024-12-11 13:57:00.777878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.777971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.777986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.778002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.778025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.778042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.778056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.575 [2024-12-11 13:57:00.778072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.575 [2024-12-11 13:57:00.778086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 
13:57:00.778365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.576 [2024-12-11 13:57:00.778741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.778971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.778986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.576 [2024-12-11 13:57:00.779389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:13.576 [2024-12-11 13:57:00.779404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 
13:57:00.779822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.779973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.779987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.780029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.780059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.780101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.577 [2024-12-11 13:57:00.780139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780155] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14950 is same with the state(6) to be set 00:16:13.577 [2024-12-11 13:57:00.780173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75384 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75392 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75400 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75408 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75976 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75984 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75992 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76000 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76008 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76016 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.577 [2024-12-11 13:57:00.780837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.577 [2024-12-11 13:57:00.780852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.577 [2024-12-11 13:57:00.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0 00:16:13.577 [2024-12-11 13:57:00.780877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:13.578 [2024-12-11 13:57:00.780907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.780918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.780928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76040 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.780942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.780956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.780966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.780977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76048 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.780996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75416 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75424 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75432 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75440 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781259] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75448 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75456 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75464 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:13.578 [2024-12-11 13:57:00.781452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:13.578 [2024-12-11 13:57:00.781462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75472 len:8 PRP1 0x0 PRP2 0x0 00:16:13.578 [2024-12-11 13:57:00.781476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.578 [2024-12-11 13:57:00.781541] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:13.578 [2024-12-11 13:57:00.781561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:13.578 [2024-12-11 13:57:00.785629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:13.578 [2024-12-11 13:57:00.785684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa1c60 (9): Bad file descriptor 00:16:13.578 [2024-12-11 13:57:00.807795] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:16:13.578 7646.10 IOPS, 29.87 MiB/s [2024-12-11T13:57:06.625Z] 7747.27 IOPS, 30.26 MiB/s [2024-12-11T13:57:06.625Z] 7846.33 IOPS, 30.65 MiB/s [2024-12-11T13:57:06.625Z] 7942.46 IOPS, 31.03 MiB/s [2024-12-11T13:57:06.625Z] 8013.43 IOPS, 31.30 MiB/s [2024-12-11T13:57:06.625Z] 8045.07 IOPS, 31.43 MiB/s 00:16:13.578 Latency(us) 00:16:13.578 [2024-12-11T13:57:06.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.578 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.578 Verification LBA range: start 0x0 length 0x4000 00:16:13.578 NVMe0n1 : 15.01 8047.49 31.44 199.15 0.00 15489.04 621.85 17515.99 00:16:13.578 [2024-12-11T13:57:06.625Z] =================================================================================================================== 00:16:13.578 [2024-12-11T13:57:06.625Z] Total : 8047.49 31.44 199.15 0.00 15489.04 621.85 17515.99 00:16:13.578 Received shutdown signal, test time was about 15.000000 seconds 00:16:13.578 00:16:13.578 Latency(us) 00:16:13.578 [2024-12-11T13:57:06.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.578 [2024-12-11T13:57:06.625Z] =================================================================================================================== 00:16:13.578 [2024-12-11T13:57:06.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76783 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76783 /var/tmp/bdevperf.sock 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76783 ']' 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.578 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:14.145 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.145 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:14.145 13:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:14.402 [2024-12-11 13:57:07.247359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:14.402 13:57:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:14.661 [2024-12-11 13:57:07.624024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:14.661 13:57:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:14.919 NVMe0n1 00:16:15.178 13:57:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:15.436 00:16:15.436 13:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:15.694 00:16:15.694 13:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.694 13:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:15.952 13:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:16.210 13:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:19.555 13:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:19.555 13:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:19.555 13:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76857 00:16:19.555 13:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:19.555 13:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76857 00:16:20.929 { 00:16:20.929 "results": [ 00:16:20.929 { 00:16:20.929 "job": "NVMe0n1", 00:16:20.929 "core_mask": "0x1", 00:16:20.929 "workload": "verify", 00:16:20.929 "status": "finished", 00:16:20.929 "verify_range": { 00:16:20.929 "start": 0, 00:16:20.929 "length": 16384 00:16:20.929 }, 00:16:20.929 "queue_depth": 128, 
00:16:20.929 "io_size": 4096, 00:16:20.929 "runtime": 1.011819, 00:16:20.929 "iops": 7401.521418356445, 00:16:20.929 "mibps": 28.912193040454863, 00:16:20.929 "io_failed": 0, 00:16:20.929 "io_timeout": 0, 00:16:20.929 "avg_latency_us": 17195.433515823206, 00:16:20.929 "min_latency_us": 3142.7490909090907, 00:16:20.929 "max_latency_us": 17515.985454545455 00:16:20.929 } 00:16:20.929 ], 00:16:20.929 "core_count": 1 00:16:20.929 } 00:16:20.929 13:57:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:20.929 [2024-12-11 13:57:06.548482] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:20.929 [2024-12-11 13:57:06.548631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76783 ] 00:16:20.929 [2024-12-11 13:57:06.691514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.929 [2024-12-11 13:57:06.759969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.929 [2024-12-11 13:57:06.818364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.929 [2024-12-11 13:57:09.158946] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:20.929 [2024-12-11 13:57:09.159117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.929 [2024-12-11 13:57:09.159146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.929 [2024-12-11 13:57:09.159165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.929 [2024-12-11 13:57:09.159180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.929 [2024-12-11 13:57:09.159195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.929 [2024-12-11 13:57:09.159209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.929 [2024-12-11 13:57:09.159224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.929 [2024-12-11 13:57:09.159238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.929 [2024-12-11 13:57:09.159253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:16:20.929 [2024-12-11 13:57:09.159310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:16:20.929 [2024-12-11 13:57:09.159351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfc60 (9): Bad file descriptor 00:16:20.929 [2024-12-11 13:57:09.163302] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:16:20.929 Running I/O for 1 seconds... 00:16:20.929 7347.00 IOPS, 28.70 MiB/s 00:16:20.929 Latency(us) 00:16:20.929 [2024-12-11T13:57:13.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.929 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:20.929 Verification LBA range: start 0x0 length 0x4000 00:16:20.929 NVMe0n1 : 1.01 7401.52 28.91 0.00 0.00 17195.43 3142.75 17515.99 00:16:20.929 [2024-12-11T13:57:13.976Z] =================================================================================================================== 00:16:20.929 [2024-12-11T13:57:13.976Z] Total : 7401.52 28.91 0.00 0.00 17195.43 3142.75 17515.99 00:16:20.929 13:57:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.929 13:57:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:20.929 13:57:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:21.496 13:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:21.496 13:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:21.755 13:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:22.014 13:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:25.303 13:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:25.303 13:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76783 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76783 ']' 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76783 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76783 00:16:25.303 killing process with pid 76783 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76783' 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76783 00:16:25.303 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76783 00:16:25.561 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:25.561 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.819 rmmod nvme_tcp 00:16:25.819 rmmod nvme_fabrics 00:16:25.819 rmmod nvme_keyring 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 76521 ']' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 76521 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76521 ']' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76521 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76521 00:16:25.819 killing process with pid 76521 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76521' 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76521 00:16:25.819 13:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76521 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.078 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:26.337 ************************************ 00:16:26.337 END TEST nvmf_failover 00:16:26.337 ************************************ 00:16:26.337 00:16:26.337 real 0m33.711s 00:16:26.337 user 2m9.981s 00:16:26.337 sys 0m5.655s 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.337 ************************************ 00:16:26.337 START TEST nvmf_host_discovery 00:16:26.337 ************************************ 00:16:26.337 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:26.596 * Looking for test storage... 
00:16:26.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:26.596 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.597 --rc genhtml_branch_coverage=1 00:16:26.597 --rc genhtml_function_coverage=1 00:16:26.597 --rc genhtml_legend=1 00:16:26.597 --rc geninfo_all_blocks=1 00:16:26.597 --rc geninfo_unexecuted_blocks=1 00:16:26.597 00:16:26.597 ' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.597 --rc genhtml_branch_coverage=1 00:16:26.597 --rc genhtml_function_coverage=1 00:16:26.597 --rc genhtml_legend=1 00:16:26.597 --rc geninfo_all_blocks=1 00:16:26.597 --rc geninfo_unexecuted_blocks=1 00:16:26.597 00:16:26.597 ' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.597 --rc genhtml_branch_coverage=1 00:16:26.597 --rc genhtml_function_coverage=1 00:16:26.597 --rc genhtml_legend=1 00:16:26.597 --rc geninfo_all_blocks=1 00:16:26.597 --rc geninfo_unexecuted_blocks=1 00:16:26.597 00:16:26.597 ' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.597 --rc genhtml_branch_coverage=1 00:16:26.597 --rc genhtml_function_coverage=1 00:16:26.597 --rc genhtml_legend=1 00:16:26.597 --rc geninfo_all_blocks=1 00:16:26.597 --rc geninfo_unexecuted_blocks=1 00:16:26.597 00:16:26.597 ' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.597 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.597 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:26.598 Cannot find device "nvmf_init_br" 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:26.598 Cannot find device "nvmf_init_br2" 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:26.598 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:26.856 Cannot find device "nvmf_tgt_br" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.856 Cannot find device "nvmf_tgt_br2" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:26.856 Cannot find device "nvmf_init_br" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:26.856 Cannot find device "nvmf_init_br2" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:26.856 Cannot find device "nvmf_tgt_br" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:26.856 Cannot find device "nvmf_tgt_br2" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:26.856 Cannot find device "nvmf_br" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:26.856 Cannot find device "nvmf_init_if" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:26.856 Cannot find device "nvmf_init_if2" 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:26.856 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.857 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:27.114 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:27.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:27.114 00:16:27.114 --- 10.0.0.3 ping statistics --- 00:16:27.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.115 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:27.115 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:27.115 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:16:27.115 00:16:27.115 --- 10.0.0.4 ping statistics --- 00:16:27.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.115 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:27.115 00:16:27.115 --- 10.0.0.1 ping statistics --- 00:16:27.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.115 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:27.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:27.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:27.115 00:16:27.115 --- 10.0.0.2 ping statistics --- 00:16:27.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.115 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=77182 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 77182 00:16:27.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 77182 ']' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.115 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.115 [2024-12-11 13:57:20.051208] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:16:27.115 [2024-12-11 13:57:20.051621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.372 [2024-12-11 13:57:20.198342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.372 [2024-12-11 13:57:20.277385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.372 [2024-12-11 13:57:20.277473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.372 [2024-12-11 13:57:20.277487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.372 [2024-12-11 13:57:20.277496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.372 [2024-12-11 13:57:20.277503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.373 [2024-12-11 13:57:20.278048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.373 [2024-12-11 13:57:20.350959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.307 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.307 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:28.307 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.307 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.307 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 [2024-12-11 13:57:21.035724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 [2024-12-11 13:57:21.043882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.307 13:57:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 null0 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 null1 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=77214 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 77214 /tmp/host.sock 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 77214 ']' 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.307 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.307 [2024-12-11 13:57:21.121501] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:16:28.307 [2024-12-11 13:57:21.121796] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77214 ] 00:16:28.307 [2024-12-11 13:57:21.266495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.307 [2024-12-11 13:57:21.329682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.566 [2024-12-11 13:57:21.384814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.566 13:57:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.566 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 [2024-12-11 13:57:21.836063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.825 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.084 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:29.084 13:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:29.651 [2024-12-11 13:57:22.479389] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:29.651 [2024-12-11 13:57:22.479424] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:29.651 [2024-12-11 13:57:22.479449] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:29.651 [2024-12-11 13:57:22.485435] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:29.651 [2024-12-11 13:57:22.539895] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:29.651 [2024-12-11 13:57:22.541304] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf7fdc0:1 started. 00:16:29.651 [2024-12-11 13:57:22.543368] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:29.651 [2024-12-11 13:57:22.543603] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:29.651 [2024-12-11 13:57:22.547855] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf7fdc0 was disconnected and freed. delete nvme_qpair. 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.219 13:57:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:30.219 13:57:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:30.219 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.479 [2024-12-11 13:57:23.311728] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf8e0b0:1 started. 00:16:30.479 [2024-12-11 13:57:23.318380] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf8e0b0 was disconnected and freed. delete nvme_qpair. 
00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 [2024-12-11 13:57:23.421839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:30.479 [2024-12-11 13:57:23.422378] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:30.479 [2024-12-11 13:57:23.422410] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:30.479 [2024-12-11 13:57:23.428341] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 [2024-12-11 13:57:23.492936] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:30.479 [2024-12-11 13:57:23.493006] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:30.479 [2024-12-11 13:57:23.493019] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:30.479 [2024-12-11 13:57:23.493025] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:30.739 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-11 13:57:23.679317] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:30.740 [2024-12-11 13:57:23.679483] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:30.740 [2024-12-11 13:57:23.680076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.740 [2024-12-11 13:57:23.680124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.740 [2024-12-11 13:57:23.680154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.740 [2024-12-11 13:57:23.680163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.740 [2024-12-11 13:57:23.680173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.740 [2024-12-11 13:57:23.680182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.740 [2024-12-11 13:57:23.680209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.740 [2024-12-11 13:57:23.680218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.740 [2024-12-11 13:57:23.680227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf5bfb0 is same with the state(6) to be set 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:30.740 [2024-12-11 13:57:23.685310] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:30.740 [2024-12-11 13:57:23.685337] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:30.740 [2024-12-11 13:57:23.685396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5bfb0 (9): Bad file descriptor 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.740 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:30.999 
13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:30.999 13:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.999 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.258 13:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.198 [2024-12-11 13:57:25.130253] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:32.198 [2024-12-11 13:57:25.130286] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:32.198 [2024-12-11 13:57:25.130305] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:32.198 [2024-12-11 13:57:25.136291] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:32.198 [2024-12-11 13:57:25.194775] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:32.198 [2024-12-11 13:57:25.195636] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xf7d1b0:1 started. 00:16:32.198 [2024-12-11 13:57:25.197994] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:32.198 [2024-12-11 13:57:25.198033] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:32.198 [2024-12-11 13:57:25.199504] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xf7d1b0 was disconnected and freed. delete nvme_qpair. 
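The sequence above (host/discovery.sh@134–@141) stops discovery, waits for the controller and bdev lists to drain, then restarts discovery with -w (wait_for_attach) and checks that two new notifications arrive; the bdev_nvme INFO lines show the resulting attach: discovery ctrlr connected on 10.0.0.3:8009, log page fetched, and subsystem nvme0 re-created via 10.0.0.3:4421. Outside the test harness the same flow can be driven by hand with scripts/rpc.py against the host application's RPC socket; a minimal sketch, reusing the socket path and addresses from this log (the $rpc shorthand is only for brevity here):

    rpc="scripts/rpc.py -s /tmp/host.sock"   # the log invokes the same tool via its absolute spdk_repo path

    # tear discovery down, then start it again and wait for the initial attach
    $rpc bdev_nvme_stop_discovery -b nvme
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
         -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # inspect what discovery produced: controllers, bdevs, discovery contexts
    $rpc bdev_nvme_get_controllers    | jq -r '.[].name'
    $rpc bdev_get_bdevs               | jq -r '.[].name'
    $rpc bdev_nvme_get_discovery_info | jq -r '.[].name'

    # notifications past a known id, as the test's get_notification_count does
    $rpc notify_get_notifications -i 2 | jq '. | length'

The repeated bdev_nvme_start_discovery calls that follow are the expected negative cases: re-starting discovery against the same service on port 8009 fails with -17 "File exists" (for either controller name), and pointing it at the unused port 8010 with -T 3000 fails with -110 "Connection timed out".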
00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.198 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.198 request: 00:16:32.199 { 00:16:32.199 "name": "nvme", 00:16:32.199 "trtype": "tcp", 00:16:32.199 "traddr": "10.0.0.3", 00:16:32.199 "adrfam": "ipv4", 00:16:32.199 "trsvcid": "8009", 00:16:32.199 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:32.199 "wait_for_attach": true, 00:16:32.199 "method": "bdev_nvme_start_discovery", 00:16:32.199 "req_id": 1 00:16:32.199 } 00:16:32.199 Got JSON-RPC error response 00:16:32.199 response: 00:16:32.199 { 00:16:32.199 "code": -17, 00:16:32.199 "message": "File exists" 00:16:32.199 } 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:32.199 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.459 request: 00:16:32.459 { 00:16:32.459 "name": "nvme_second", 00:16:32.459 "trtype": "tcp", 00:16:32.459 "traddr": "10.0.0.3", 00:16:32.459 "adrfam": "ipv4", 00:16:32.459 "trsvcid": "8009", 00:16:32.459 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:32.459 "wait_for_attach": true, 00:16:32.459 "method": "bdev_nvme_start_discovery", 00:16:32.459 "req_id": 1 00:16:32.459 } 00:16:32.459 Got JSON-RPC error response 00:16:32.459 response: 00:16:32.459 { 00:16:32.459 "code": -17, 00:16:32.459 "message": "File exists" 00:16:32.459 } 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.459 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.460 13:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.832 [2024-12-11 13:57:26.458441] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.832 [2024-12-11 13:57:26.458757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7f740 with addr=10.0.0.3, port=8010 00:16:33.832 [2024-12-11 13:57:26.458794] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:33.832 [2024-12-11 13:57:26.458806] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:33.832 [2024-12-11 13:57:26.458817] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:34.766 [2024-12-11 13:57:27.458420] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:34.766 [2024-12-11 13:57:27.458516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf807d0 with addr=10.0.0.3, port=8010 00:16:34.766 [2024-12-11 13:57:27.458540] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:34.766 [2024-12-11 13:57:27.458550] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:34.766 [2024-12-11 13:57:27.458559] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:35.702 [2024-12-11 13:57:28.458260] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:35.702 request: 00:16:35.702 { 00:16:35.702 "name": "nvme_second", 00:16:35.702 "trtype": "tcp", 00:16:35.702 "traddr": "10.0.0.3", 00:16:35.702 "adrfam": "ipv4", 00:16:35.702 "trsvcid": "8010", 00:16:35.702 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:35.702 "wait_for_attach": false, 00:16:35.702 "attach_timeout_ms": 3000, 00:16:35.702 "method": "bdev_nvme_start_discovery", 00:16:35.702 "req_id": 1 00:16:35.702 } 00:16:35.702 Got JSON-RPC error response 00:16:35.702 response: 00:16:35.702 { 00:16:35.702 "code": -110, 00:16:35.702 "message": "Connection timed out" 00:16:35.702 } 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.702 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:35.703 
13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 77214 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.703 rmmod nvme_tcp 00:16:35.703 rmmod nvme_fabrics 00:16:35.703 rmmod nvme_keyring 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 77182 ']' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 77182 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 77182 ']' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 77182 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77182 00:16:35.703 killing process with pid 77182 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77182' 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 77182 00:16:35.703 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 77182 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:35.962 13:57:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.962 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.962 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.962 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:36.221 ************************************ 00:16:36.221 END TEST nvmf_host_discovery 00:16:36.221 ************************************ 00:16:36.221 00:16:36.221 real 0m9.796s 00:16:36.221 user 0m18.102s 00:16:36.221 sys 0m1.991s 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.221 ************************************ 00:16:36.221 START TEST nvmf_host_multipath_status 00:16:36.221 ************************************ 00:16:36.221 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:36.481 * Looking for test storage... 00:16:36.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.481 --rc genhtml_branch_coverage=1 00:16:36.481 --rc genhtml_function_coverage=1 00:16:36.481 --rc genhtml_legend=1 00:16:36.481 --rc geninfo_all_blocks=1 00:16:36.481 --rc geninfo_unexecuted_blocks=1 00:16:36.481 00:16:36.481 ' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.481 --rc genhtml_branch_coverage=1 00:16:36.481 --rc genhtml_function_coverage=1 00:16:36.481 --rc genhtml_legend=1 00:16:36.481 --rc geninfo_all_blocks=1 00:16:36.481 --rc geninfo_unexecuted_blocks=1 00:16:36.481 00:16:36.481 ' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.481 --rc genhtml_branch_coverage=1 00:16:36.481 --rc genhtml_function_coverage=1 00:16:36.481 --rc genhtml_legend=1 00:16:36.481 --rc geninfo_all_blocks=1 00:16:36.481 --rc geninfo_unexecuted_blocks=1 00:16:36.481 00:16:36.481 ' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.481 --rc genhtml_branch_coverage=1 00:16:36.481 --rc genhtml_function_coverage=1 00:16:36.481 --rc genhtml_legend=1 00:16:36.481 --rc geninfo_all_blocks=1 00:16:36.481 --rc geninfo_unexecuted_blocks=1 00:16:36.481 00:16:36.481 ' 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.481 13:57:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:16:36.481 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:36.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:36.482 Cannot find device "nvmf_init_br" 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:36.482 Cannot find device "nvmf_init_br2" 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:36.482 Cannot find device "nvmf_tgt_br" 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.482 Cannot find device "nvmf_tgt_br2" 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:36.482 Cannot find device "nvmf_init_br" 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:36.482 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:36.741 Cannot find device "nvmf_init_br2" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:36.742 Cannot find device "nvmf_tgt_br" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:36.742 Cannot find device "nvmf_tgt_br2" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:36.742 Cannot find device "nvmf_br" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:36.742 Cannot find device "nvmf_init_if" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:36.742 Cannot find device "nvmf_init_if2" 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:36.742 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:37.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:16:37.002 00:16:37.002 --- 10.0.0.3 ping statistics --- 00:16:37.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.002 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:37.002 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:37.002 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:16:37.002 00:16:37.002 --- 10.0.0.4 ping statistics --- 00:16:37.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.002 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:37.002 00:16:37.002 --- 10.0.0.1 ping statistics --- 00:16:37.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.002 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:37.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:37.002 00:16:37.002 --- 10.0.0.2 ping statistics --- 00:16:37.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.002 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.002 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:37.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=77715 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 77715 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77715 ']' 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
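nvmftestinit above (nvmf/common.sh@145–@225) gives the target its own network namespace and wires it to the initiator side over veth pairs and a bridge: 10.0.0.1 and 10.0.0.2 stay in the root namespace, 10.0.0.3 and 10.0.0.4 live inside nvmf_tgt_ns_spdk, nvmf_br bridges the peer ends, iptables rules admit the NVMe/TCP listener port, and the pings confirm reachability before the target is launched. Reduced to the first initiator/target pair only (the script builds nvmf_init_if2/nvmf_tgt_if2 the same way), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # let the NVMe/TCP listener port through, then verify the path
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3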
00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.003 13:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-11 13:57:29.946675] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:37.003 [2024-12-11 13:57:29.946818] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.265 [2024-12-11 13:57:30.102335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:37.265 [2024-12-11 13:57:30.189287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.265 [2024-12-11 13:57:30.189640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.265 [2024-12-11 13:57:30.189839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.265 [2024-12-11 13:57:30.190007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.265 [2024-12-11 13:57:30.190055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.265 [2024-12-11 13:57:30.191856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.266 [2024-12-11 13:57:30.191869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.266 [2024-12-11 13:57:30.272569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77715 00:16:38.200 13:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.200 [2024-12-11 13:57:31.237967] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.458 13:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:38.717 Malloc0 00:16:38.717 13:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:38.978 13:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.239 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:39.497 [2024-12-11 13:57:32.490664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:39.497 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:39.756 [2024-12-11 13:57:32.747061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:39.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77771 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77771 /var/tmp/bdevperf.sock 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77771 ']' 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
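Taken together, the RPC calls above build a single ANA-reporting subsystem that is reachable over two TCP listeners, which is what the multipath checks further down exercise. A condensed sketch of that sequence, with the rpc.py path, NQN, addresses and flags taken verbatim from the log (the grouping into a script is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192                       # transport options as shown in the log
  $rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB / 512 B-block backing bdev
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2   # -r turns on ANA reporting, -a allows any host
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420   # first path
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4421   # second path

bdevperf is then started with -z (wait for RPC) on /var/tmp/bdevperf.sock so the host-side multipath controllers can be attached to it next.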
00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.756 13:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:40.323 13:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.323 13:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:40.323 13:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:40.581 13:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:40.839 Nvme0n1 00:16:40.839 13:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:41.405 Nvme0n1 00:16:41.405 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:41.405 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:43.306 13:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:43.306 13:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:43.564 13:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:43.823 13:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:44.759 13:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:44.759 13:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:44.759 13:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.759 13:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:45.326 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.326 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:45.326 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.326 13:57:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:45.585 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.585 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.585 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.585 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.843 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.843 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.843 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.843 13:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:46.101 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.101 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:46.101 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.101 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:46.360 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.360 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:46.618 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:46.618 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.876 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.876 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:46.876 13:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:47.134 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:47.393 13:57:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.814 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:49.073 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.073 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:49.073 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.073 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:49.331 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.331 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:49.331 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.331 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:49.589 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.589 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:49.589 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.589 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:49.848 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.848 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:49.848 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.848 13:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:50.106 13:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.106 13:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:50.106 13:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:50.672 13:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:50.930 13:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:51.864 13:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:51.864 13:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:51.864 13:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.864 13:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:52.122 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.122 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:52.122 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.122 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:52.379 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:52.379 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:52.379 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:52.379 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.637 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.637 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:52.637 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:52.637 13:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.253 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.253 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:53.253 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.253 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:53.511 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.511 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:53.511 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.511 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:53.770 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.770 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:53.770 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:54.028 13:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:54.286 13:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:55.221 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:55.221 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:55.221 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.221 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:55.479 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.479 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:55.479 13:57:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.479 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:55.737 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.737 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:55.737 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.737 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:55.996 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.996 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:55.996 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.996 13:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.254 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.254 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:56.254 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.254 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:56.821 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.821 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:56.821 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.821 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.080 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.080 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:57.080 13:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:57.354 13:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:57.613 13:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:58.557 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:58.557 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:58.557 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.557 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:58.815 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:58.815 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:58.815 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.815 13:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:59.074 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.074 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:59.074 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.074 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:59.333 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.333 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:59.333 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.333 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:59.591 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.591 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:59.591 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.591 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:59.850 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.850 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:59.850 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.850 13:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.109 13:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.109 13:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:00.109 13:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:00.367 13:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:00.626 13:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.003 13:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.301 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.301 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:02.301 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.301 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:17:02.559 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.559 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:02.559 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.559 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:02.817 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.817 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:02.817 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:02.817 13:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.075 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.075 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:03.075 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.075 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:03.333 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.333 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:03.591 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:03.591 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:03.849 13:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:04.107 13:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.483 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:05.741 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.741 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:05.741 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:05.741 13:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.000 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.000 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:06.000 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:06.000 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.570 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.570 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:06.570 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.570 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:06.845 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.845 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:06.845 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:06.845 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.103 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.103 
13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:07.103 13:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:07.362 13:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:07.620 13:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:08.562 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:08.562 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:08.562 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.562 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:08.821 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:08.821 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:08.821 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.821 13:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:09.079 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.079 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:09.079 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:09.079 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.339 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.339 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:09.339 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.339 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:09.907 13:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.474 13:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:10.474 13:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:10.474 13:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:10.474 13:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:10.733 13:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:12.108 13:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:12.109 13:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:12.109 13:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.109 13:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:12.109 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.109 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:12.109 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:12.109 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.366 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.366 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:12.366 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.366 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:12.624 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.624 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:12.624 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:12.624 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.883 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.883 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:12.883 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.883 13:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:13.141 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.141 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:13.141 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.141 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:13.708 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.708 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:13.708 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:13.708 13:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:14.275 13:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:15.238 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:15.238 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:15.238 13:58:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.238 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:15.495 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.495 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:15.496 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.496 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:15.754 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.754 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:15.754 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.754 13:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:16.012 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.012 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:16.012 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.012 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:16.580 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.580 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:16.580 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:16.580 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.839 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.839 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:16.839 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.839 13:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77771 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77771 ']' 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77771 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77771 00:17:17.097 killing process with pid 77771 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77771' 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77771 00:17:17.097 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77771 00:17:17.097 { 00:17:17.097 "results": [ 00:17:17.097 { 00:17:17.097 "job": "Nvme0n1", 00:17:17.097 "core_mask": "0x4", 00:17:17.097 "workload": "verify", 00:17:17.097 "status": "terminated", 00:17:17.097 "verify_range": { 00:17:17.097 "start": 0, 00:17:17.097 "length": 16384 00:17:17.097 }, 00:17:17.097 "queue_depth": 128, 00:17:17.097 "io_size": 4096, 00:17:17.097 "runtime": 35.753503, 00:17:17.097 "iops": 7703.6926983070725, 00:17:17.097 "mibps": 30.092549602762002, 00:17:17.097 "io_failed": 0, 00:17:17.097 "io_timeout": 0, 00:17:17.097 "avg_latency_us": 16581.89697824326, 00:17:17.097 "min_latency_us": 269.96363636363634, 00:17:17.097 "max_latency_us": 4026531.84 00:17:17.097 } 00:17:17.097 ], 00:17:17.097 "core_count": 1 00:17:17.097 } 00:17:17.360 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77771 00:17:17.360 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:17.360 [2024-12-11 13:57:32.846961] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:17.360 [2024-12-11 13:57:32.847163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:17:17.360 [2024-12-11 13:57:33.006333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.360 [2024-12-11 13:57:33.090180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.360 [2024-12-11 13:57:33.167964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.360 Running I/O for 90 seconds... 
00:17:17.360 8136.00 IOPS, 31.78 MiB/s [2024-12-11T13:58:10.407Z] 8348.00 IOPS, 32.61 MiB/s [2024-12-11T13:58:10.407Z] 8584.00 IOPS, 33.53 MiB/s [2024-12-11T13:58:10.407Z] 8622.00 IOPS, 33.68 MiB/s [2024-12-11T13:58:10.407Z] 8727.20 IOPS, 34.09 MiB/s [2024-12-11T13:58:10.407Z] 8727.17 IOPS, 34.09 MiB/s [2024-12-11T13:58:10.407Z] 8763.00 IOPS, 34.23 MiB/s [2024-12-11T13:58:10.407Z] 8794.88 IOPS, 34.35 MiB/s [2024-12-11T13:58:10.407Z] 8808.78 IOPS, 34.41 MiB/s [2024-12-11T13:58:10.407Z] 8804.70 IOPS, 34.39 MiB/s [2024-12-11T13:58:10.407Z] 8828.18 IOPS, 34.49 MiB/s [2024-12-11T13:58:10.407Z] 8842.50 IOPS, 34.54 MiB/s [2024-12-11T13:58:10.407Z] 8851.23 IOPS, 34.58 MiB/s [2024-12-11T13:58:10.407Z] 8858.36 IOPS, 34.60 MiB/s [2024-12-11T13:58:10.407Z] 8821.93 IOPS, 34.46 MiB/s [2024-12-11T13:58:10.407Z] [2024-12-11 13:57:50.111676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.111804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.111866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.111890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.111913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.111929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.111951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.111966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.111988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.112004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.112041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.112078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.360 [2024-12-11 13:57:50.112115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.360 [2024-12-11 13:57:50.112167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.360 [2024-12-11 13:57:50.112252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.360 [2024-12-11 13:57:50.112289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:17.360 [2024-12-11 13:57:50.112310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:17.361 [2024-12-11 13:57:50.112545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.112795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.112841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.112879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.112916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.112953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.112974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.361 [2024-12-11 13:57:50.113420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:17:17.361 [2024-12-11 13:57:50.113725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:17.361 [2024-12-11 13:57:50.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.361 [2024-12-11 13:57:50.113856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.113879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.113894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.113916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.113953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.113969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.113990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.114662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:17.362 [2024-12-11 13:57:50.114880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.114974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.114995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.362 [2024-12-11 13:57:50.115379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.115416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.362 [2024-12-11 13:57:50.115453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:17.362 [2024-12-11 13:57:50.115474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.115974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.115989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.116714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.363 [2024-12-11 13:57:50.116773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.116808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.116826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.116854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.116870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:17:17.363 [2024-12-11 13:57:50.116898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.116926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.116955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.116971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.116999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.363 [2024-12-11 13:57:50.117776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:17.363 [2024-12-11 13:57:50.117805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:57:50.117820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:57:50.117849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:57:50.117864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:57:50.117906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:57:50.117931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:17.364 8677.00 IOPS, 33.89 MiB/s [2024-12-11T13:58:10.411Z] 8166.59 IOPS, 31.90 MiB/s [2024-12-11T13:58:10.411Z] 7712.89 IOPS, 30.13 MiB/s [2024-12-11T13:58:10.411Z] 7306.95 IOPS, 28.54 MiB/s [2024-12-11T13:58:10.411Z] 7034.80 IOPS, 27.48 MiB/s [2024-12-11T13:58:10.411Z] 7072.00 IOPS, 27.62 MiB/s [2024-12-11T13:58:10.411Z] 7119.82 IOPS, 27.81 MiB/s [2024-12-11T13:58:10.411Z] 7186.22 IOPS, 28.07 MiB/s [2024-12-11T13:58:10.411Z] 7273.67 IOPS, 28.41 MiB/s [2024-12-11T13:58:10.411Z] 7353.92 IOPS, 28.73 MiB/s [2024-12-11T13:58:10.411Z] 7425.38 IOPS, 29.01 MiB/s [2024-12-11T13:58:10.411Z] 7444.89 IOPS, 29.08 MiB/s [2024-12-11T13:58:10.411Z] 7455.82 IOPS, 29.12 MiB/s [2024-12-11T13:58:10.411Z] 7475.79 IOPS, 29.20 MiB/s [2024-12-11T13:58:10.411Z] 7521.57 IOPS, 29.38 MiB/s [2024-12-11T13:58:10.411Z] 7579.55 IOPS, 29.61 MiB/s [2024-12-11T13:58:10.411Z] 7630.16 IOPS, 29.81 MiB/s [2024-12-11T13:58:10.411Z] [2024-12-11 13:58:07.025926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:17:17.364 [2024-12-11 13:58:07.026301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.026982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.026997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.027209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.027260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.027297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.364 [2024-12-11 13:58:07.027333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.364 [2024-12-11 13:58:07.027458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.364 [2024-12-11 13:58:07.027479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:17.365 [2024-12-11 13:58:07.027565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.027900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.027968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.027989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.028003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.028023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.028037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.029972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.029996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.030021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.030043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.030057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:17.365 [2024-12-11 13:58:07.030078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.365 [2024-12-11 13:58:07.030092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.030113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.030127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.030147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.365 [2024-12-11 13:58:07.030161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:17.365 [2024-12-11 13:58:07.030182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.366 [2024-12-11 13:58:07.030792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:17.366 [2024-12-11 13:58:07.030923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.366 [2024-12-11 13:58:07.030937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:17.366 7665.06 IOPS, 29.94 MiB/s [2024-12-11T13:58:10.413Z] 7679.62 IOPS, 30.00 MiB/s [2024-12-11T13:58:10.413Z] 7694.46 IOPS, 30.06 MiB/s [2024-12-11T13:58:10.413Z] Received shutdown signal, test time was about 35.754321 seconds 00:17:17.366 00:17:17.366 Latency(us) 00:17:17.366 [2024-12-11T13:58:10.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.366 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:17.366 Verification LBA range: start 0x0 length 0x4000 00:17:17.366 Nvme0n1 : 35.75 7703.69 30.09 0.00 0.00 16581.90 269.96 4026531.84 00:17:17.366 [2024-12-11T13:58:10.413Z] =================================================================================================================== 00:17:17.366 [2024-12-11T13:58:10.413Z] Total : 7703.69 30.09 0.00 0.00 16581.90 269.96 4026531.84 00:17:17.366 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.625 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:17.625 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:17.625 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:17.625 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.625 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.883 rmmod nvme_tcp 00:17:17.883 rmmod nvme_fabrics 00:17:17.883 rmmod nvme_keyring 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 77715 ']' 00:17:17.883 13:58:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 77715 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77715 ']' 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77715 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77715 00:17:17.883 killing process with pid 77715 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77715' 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77715 00:17:17.883 13:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77715 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:18.142 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:18.401 00:17:18.401 real 0m42.054s 00:17:18.401 user 2m14.901s 00:17:18.401 sys 0m13.028s 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:18.401 ************************************ 00:17:18.401 END TEST nvmf_host_multipath_status 00:17:18.401 ************************************ 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.401 ************************************ 00:17:18.401 START TEST nvmf_discovery_remove_ifc 00:17:18.401 ************************************ 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:18.401 * Looking for test storage... 
00:17:18.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:18.401 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.661 --rc genhtml_branch_coverage=1 00:17:18.661 --rc genhtml_function_coverage=1 00:17:18.661 --rc genhtml_legend=1 00:17:18.661 --rc geninfo_all_blocks=1 00:17:18.661 --rc geninfo_unexecuted_blocks=1 00:17:18.661 00:17:18.661 ' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.661 --rc genhtml_branch_coverage=1 00:17:18.661 --rc genhtml_function_coverage=1 00:17:18.661 --rc genhtml_legend=1 00:17:18.661 --rc geninfo_all_blocks=1 00:17:18.661 --rc geninfo_unexecuted_blocks=1 00:17:18.661 00:17:18.661 ' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.661 --rc genhtml_branch_coverage=1 00:17:18.661 --rc genhtml_function_coverage=1 00:17:18.661 --rc genhtml_legend=1 00:17:18.661 --rc geninfo_all_blocks=1 00:17:18.661 --rc geninfo_unexecuted_blocks=1 00:17:18.661 00:17:18.661 ' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.661 --rc genhtml_branch_coverage=1 00:17:18.661 --rc genhtml_function_coverage=1 00:17:18.661 --rc genhtml_legend=1 00:17:18.661 --rc geninfo_all_blocks=1 00:17:18.661 --rc geninfo_unexecuted_blocks=1 00:17:18.661 00:17:18.661 ' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.661 13:58:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.661 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.662 13:58:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:18.662 Cannot find device "nvmf_init_br" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:18.662 Cannot find device "nvmf_init_br2" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:18.662 Cannot find device "nvmf_tgt_br" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.662 Cannot find device "nvmf_tgt_br2" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:18.662 Cannot find device "nvmf_init_br" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:18.662 Cannot find device "nvmf_init_br2" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:18.662 Cannot find device "nvmf_tgt_br" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:18.662 Cannot find device "nvmf_tgt_br2" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:18.662 Cannot find device "nvmf_br" 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:18.662 Cannot find device "nvmf_init_if" 00:17:18.662 13:58:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:18.662 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:18.930 Cannot find device "nvmf_init_if2" 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.930 13:58:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:18.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:17:18.930 00:17:18.930 --- 10.0.0.3 ping statistics --- 00:17:18.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.930 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:18.930 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:18.930 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:18.930 00:17:18.930 --- 10.0.0.4 ping statistics --- 00:17:18.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.930 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:18.930 00:17:18.930 --- 10.0.0.1 ping statistics --- 00:17:18.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.930 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:18.930 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:19.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:17:19.212 00:17:19.212 --- 10.0.0.2 ping statistics --- 00:17:19.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.212 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=78636 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 78636 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78636 ']' 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.212 13:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.212 [2024-12-11 13:58:12.060463] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:19.212 [2024-12-11 13:58:12.060554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.212 [2024-12-11 13:58:12.215606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.471 [2024-12-11 13:58:12.281574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.471 [2024-12-11 13:58:12.281650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.471 [2024-12-11 13:58:12.281664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.471 [2024-12-11 13:58:12.281675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.471 [2024-12-11 13:58:12.281683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.471 [2024-12-11 13:58:12.282148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.471 [2024-12-11 13:58:12.339583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.471 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.471 [2024-12-11 13:58:12.469435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.471 [2024-12-11 13:58:12.477593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:19.471 null0 00:17:19.471 [2024-12-11 13:58:12.509521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=78655 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 78655 /tmp/host.sock 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78655 ']' 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.731 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.731 13:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.731 [2024-12-11 13:58:12.592919] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:19.731 [2024-12-11 13:58:12.593010] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78655 ] 00:17:19.731 [2024-12-11 13:58:12.741896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.989 [2024-12-11 13:58:12.798369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.556 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 [2024-12-11 13:58:13.664603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.815 13:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 [2024-12-11 13:58:14.722748] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:21.749 [2024-12-11 13:58:14.722806] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:21.749 [2024-12-11 13:58:14.722832] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:21.749 [2024-12-11 13:58:14.728806] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:21.749 [2024-12-11 13:58:14.783199] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:21.749 [2024-12-11 13:58:14.784416] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14cdfb0:1 started. 00:17:21.749 [2024-12-11 13:58:14.786481] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:21.749 [2024-12-11 13:58:14.786546] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:21.749 [2024-12-11 13:58:14.786578] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:21.749 [2024-12-11 13:58:14.786596] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:21.749 [2024-12-11 13:58:14.786622] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.749 [2024-12-11 13:58:14.791440] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14cdfb0 was disconnected and freed. delete nvme_qpair. 
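Note: the rpc_cmd calls traced above drive the host application's JSON-RPC socket; in the autotest harness rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py. Replayed by hand against the same socket, the setup would look roughly like the sketch below (the client path is an assumption based on the checked-out repo; the socket, flags and NQN are copied from the trace):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  # pass the same bdev_nvme option the test sets before subsystem init
  $rpc_py bdev_nvme_set_options -e 1
  # finish initialization of the app that was started with --wait-for-rpc
  $rpc_py framework_start_init
  # start the discovery service with deliberately short reconnect/loss timeouts
  $rpc_py bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach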
00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.749 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.007 13:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.940 13:58:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.940 13:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.313 13:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.313 13:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:24.313 13:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:25.248 13:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:26.183 13:58:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:26.183 13:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.119 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.377 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:27.378 13:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.378 [2024-12-11 13:58:20.213854] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:27.378 [2024-12-11 13:58:20.213955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.378 [2024-12-11 13:58:20.213973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.378 [2024-12-11 13:58:20.213986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.378 [2024-12-11 13:58:20.213996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.378 [2024-12-11 13:58:20.214006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.378 [2024-12-11 13:58:20.214016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.378 [2024-12-11 13:58:20.214027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.378 [2024-12-11 13:58:20.214036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.378 [2024-12-11 13:58:20.214047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.378 [2024-12-11 13:58:20.214055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.378 [2024-12-11 13:58:20.214080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c6e20 is same with the state(6) to be set 00:17:27.378 [2024-12-11 13:58:20.223850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c6e20 (9): Bad file descriptor 00:17:27.378 [2024-12-11 13:58:20.233867] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:27.378 [2024-12-11 13:58:20.233888] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:27.378 [2024-12-11 13:58:20.233894] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:27.378 [2024-12-11 13:58:20.233900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:27.378 [2024-12-11 13:58:20.233956] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:28.312 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:28.312 [2024-12-11 13:58:21.248837] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:28.312 [2024-12-11 13:58:21.249262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c6e20 with addr=10.0.0.3, port=4420 00:17:28.312 [2024-12-11 13:58:21.249316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c6e20 is same with the state(6) to be set 00:17:28.312 [2024-12-11 13:58:21.249396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c6e20 (9): Bad file descriptor 00:17:28.312 [2024-12-11 13:58:21.250350] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:17:28.312 [2024-12-11 13:58:21.250435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:28.312 [2024-12-11 13:58:21.250463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:28.313 [2024-12-11 13:58:21.250486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:28.313 [2024-12-11 13:58:21.250506] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:28.313 [2024-12-11 13:58:21.250522] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
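Note: the repeating bdev_get_bdevs | jq | sort | xargs blocks above, separated by sleep 1, come from the test's get_bdev_list and wait_for_bdev helpers (host/discovery_remove_ifc.sh@29 and @33-34 in the trace). They poll the host's bdev list once a second until it matches the expected value, first "nvme0n1" and, after the target interface is pulled, the empty string. Reconstructed from the trace, the helpers are roughly:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  get_bdev_list() {
      $rpc_py bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }
  # wait_for_bdev nvme0n1   - after the discovery attach completes
  # wait_for_bdev ''        - after 'ip addr del' / 'ip link set ... down'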
00:17:28.313 [2024-12-11 13:58:21.250533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:28.313 [2024-12-11 13:58:21.250555] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:28.313 [2024-12-11 13:58:21.250576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:28.313 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.313 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:28.313 13:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:29.276 [2024-12-11 13:58:22.250661] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:29.276 [2024-12-11 13:58:22.250727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:29.276 [2024-12-11 13:58:22.250762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:29.276 [2024-12-11 13:58:22.250773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:29.276 [2024-12-11 13:58:22.250784] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:29.276 [2024-12-11 13:58:22.250794] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:29.276 [2024-12-11 13:58:22.250802] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:29.276 [2024-12-11 13:58:22.250808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:17:29.276 [2024-12-11 13:58:22.250844] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:29.276 [2024-12-11 13:58:22.250894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.276 [2024-12-11 13:58:22.250914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.276 [2024-12-11 13:58:22.250928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.276 [2024-12-11 13:58:22.250937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.276 [2024-12-11 13:58:22.250948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.276 [2024-12-11 13:58:22.250957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.276 [2024-12-11 13:58:22.250967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.276 [2024-12-11 13:58:22.250976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.276 [2024-12-11 13:58:22.250986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.276 [2024-12-11 13:58:22.250995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.276 [2024-12-11 13:58:22.251004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
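Note: the failure cascade above is the behavior the discovery flags ask for. With --reconnect-delay-sec 1 the bdev_nvme module retries the lost connection about once per second, --fast-io-fail-timeout-sec 1 roughly bounds how long queued I/O may wait during the reconnect, and once --ctrlr-loss-timeout-sec 2 expires without a successful reconnect the controller is deleted, taking nvme0n1 with it, which is what lets the wait_for_bdev '' poll complete. The same teardown can be watched out of band, e.g. (sketch; bdev_nvme_get_controllers is a standard SPDK RPC, the one-second interval is an arbitrary choice):

  while sleep 1; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
          bdev_nvme_get_controllers
  done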
00:17:29.276 [2024-12-11 13:58:22.251600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1452a20 (9): Bad file descriptor 00:17:29.276 [2024-12-11 13:58:22.252613] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:29.276 [2024-12-11 13:58:22.252793] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.276 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:29.536 13:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:30.470 13:58:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:30.470 13:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:31.405 [2024-12-11 13:58:24.257257] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:31.405 [2024-12-11 13:58:24.257295] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:31.405 [2024-12-11 13:58:24.257313] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:31.405 [2024-12-11 13:58:24.263302] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:31.405 [2024-12-11 13:58:24.326010] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:31.405 [2024-12-11 13:58:24.327072] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x14e6060:1 started. 00:17:31.405 [2024-12-11 13:58:24.328746] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:31.405 [2024-12-11 13:58:24.328930] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:31.405 [2024-12-11 13:58:24.328994] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:31.405 [2024-12-11 13:58:24.329108] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:31.405 [2024-12-11 13:58:24.329170] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:31.405 [2024-12-11 13:58:24.335897] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x14e6060 was disconnected and freed. delete nvme_qpair. 
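Note: taken together with the ip commands traced earlier, the fault this test injects and recovers from is, end to end (recap sketch; the namespace and interface names are the ones the harness created for this run):

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # ... nvme0n1 disappears once the 2 s ctrlr-loss timeout expires ...
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # ... the still-running discovery service re-attaches; since the old
  #     controller was deleted rather than reused, the new bdev is nvme1n1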
00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 78655 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78655 ']' 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78655 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78655 00:17:31.663 killing process with pid 78655 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78655' 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78655 00:17:31.663 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78655 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.922 rmmod nvme_tcp 00:17:31.922 rmmod nvme_fabrics 00:17:31.922 rmmod nvme_keyring 00:17:31.922 13:58:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 78636 ']' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 78636 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78636 ']' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78636 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78636 00:17:31.922 killing process with pid 78636 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78636' 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78636 00:17:31.922 13:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78636 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:32.181 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:32.440 00:17:32.440 real 0m14.001s 00:17:32.440 user 0m24.174s 00:17:32.440 sys 0m2.476s 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.440 ************************************ 00:17:32.440 END TEST nvmf_discovery_remove_ifc 00:17:32.440 ************************************ 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.440 ************************************ 00:17:32.440 START TEST nvmf_identify_kernel_target 00:17:32.440 ************************************ 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:32.440 * Looking for test storage... 
00:17:32.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.440 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.699 --rc genhtml_branch_coverage=1 00:17:32.699 --rc genhtml_function_coverage=1 00:17:32.699 --rc genhtml_legend=1 00:17:32.699 --rc geninfo_all_blocks=1 00:17:32.699 --rc geninfo_unexecuted_blocks=1 00:17:32.699 00:17:32.699 ' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.699 --rc genhtml_branch_coverage=1 00:17:32.699 --rc genhtml_function_coverage=1 00:17:32.699 --rc genhtml_legend=1 00:17:32.699 --rc geninfo_all_blocks=1 00:17:32.699 --rc geninfo_unexecuted_blocks=1 00:17:32.699 00:17:32.699 ' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.699 --rc genhtml_branch_coverage=1 00:17:32.699 --rc genhtml_function_coverage=1 00:17:32.699 --rc genhtml_legend=1 00:17:32.699 --rc geninfo_all_blocks=1 00:17:32.699 --rc geninfo_unexecuted_blocks=1 00:17:32.699 00:17:32.699 ' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.699 --rc genhtml_branch_coverage=1 00:17:32.699 --rc genhtml_function_coverage=1 00:17:32.699 --rc genhtml_legend=1 00:17:32.699 --rc geninfo_all_blocks=1 00:17:32.699 --rc geninfo_unexecuted_blocks=1 00:17:32.699 00:17:32.699 ' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
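Note: identify_kernel_nvmf.sh starts by sourcing test/nvmf/common.sh, whose nvmftestinit / nvmf_veth_init path is traced below. The "Cannot find device" messages there only confirm that the previous test's nvmftestfini already deleted the virtual topology, which is then rebuilt from scratch. Stripped to a single initiator/target pair, the rebuild amounts to roughly the following (sketch; names and addresses are taken from the trace, and the real helper also adds a second pair and enslaves the *_br ends to the nvmf_br bridge):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up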
00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.699 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:32.700 13:58:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.700 13:58:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:32.700 Cannot find device "nvmf_init_br" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:32.700 Cannot find device "nvmf_init_br2" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:32.700 Cannot find device "nvmf_tgt_br" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.700 Cannot find device "nvmf_tgt_br2" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:32.700 Cannot find device "nvmf_init_br" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:32.700 Cannot find device "nvmf_init_br2" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:32.700 Cannot find device "nvmf_tgt_br" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:32.700 Cannot find device "nvmf_tgt_br2" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:32.700 Cannot find device "nvmf_br" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:32.700 Cannot find device "nvmf_init_if" 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:32.700 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:32.959 Cannot find device "nvmf_init_if2" 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.959 13:58:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:32.959 13:58:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:32.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:32.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:32.959 00:17:32.959 --- 10.0.0.3 ping statistics --- 00:17:32.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.959 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:32.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:32.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:32.959 00:17:32.959 --- 10.0.0.4 ping statistics --- 00:17:32.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.959 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:32.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:32.959 00:17:32.959 --- 10.0.0.1 ping statistics --- 00:17:32.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.959 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:32.959 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:32.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:32.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:32.959 00:17:32.960 --- 10.0.0.2 ping statistics --- 00:17:32.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.960 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.960 13:58:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:33.218 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:33.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:33.492 Waiting for block devices as requested 00:17:33.492 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.492 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.773 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:33.773 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:33.773 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:33.773 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:33.774 No valid GPT data, bailing 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:33.774 13:58:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:33.774 No valid GPT data, bailing 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:33.774 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:34.033 No valid GPT data, bailing 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:34.033 No valid GPT data, bailing 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -a 10.0.0.1 -t tcp -s 4420 00:17:34.033 00:17:34.033 Discovery Log Number of Records 2, Generation counter 2 00:17:34.033 =====Discovery Log Entry 0====== 00:17:34.033 trtype: tcp 00:17:34.033 adrfam: ipv4 00:17:34.033 subtype: current discovery subsystem 00:17:34.033 treq: not specified, sq flow control disable supported 00:17:34.033 portid: 1 00:17:34.033 trsvcid: 4420 00:17:34.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:34.033 traddr: 10.0.0.1 00:17:34.033 eflags: none 00:17:34.033 sectype: none 00:17:34.033 =====Discovery Log Entry 1====== 00:17:34.033 trtype: tcp 00:17:34.033 adrfam: ipv4 00:17:34.033 subtype: nvme subsystem 00:17:34.033 treq: not 
specified, sq flow control disable supported 00:17:34.033 portid: 1 00:17:34.033 trsvcid: 4420 00:17:34.033 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:34.033 traddr: 10.0.0.1 00:17:34.033 eflags: none 00:17:34.033 sectype: none 00:17:34.033 13:58:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:34.033 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:34.292 ===================================================== 00:17:34.292 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:34.292 ===================================================== 00:17:34.292 Controller Capabilities/Features 00:17:34.292 ================================ 00:17:34.292 Vendor ID: 0000 00:17:34.292 Subsystem Vendor ID: 0000 00:17:34.292 Serial Number: 9fff900ec2cdd4963558 00:17:34.292 Model Number: Linux 00:17:34.292 Firmware Version: 6.8.9-20 00:17:34.292 Recommended Arb Burst: 0 00:17:34.292 IEEE OUI Identifier: 00 00 00 00:17:34.292 Multi-path I/O 00:17:34.292 May have multiple subsystem ports: No 00:17:34.292 May have multiple controllers: No 00:17:34.292 Associated with SR-IOV VF: No 00:17:34.292 Max Data Transfer Size: Unlimited 00:17:34.292 Max Number of Namespaces: 0 00:17:34.292 Max Number of I/O Queues: 1024 00:17:34.292 NVMe Specification Version (VS): 1.3 00:17:34.292 NVMe Specification Version (Identify): 1.3 00:17:34.292 Maximum Queue Entries: 1024 00:17:34.292 Contiguous Queues Required: No 00:17:34.292 Arbitration Mechanisms Supported 00:17:34.292 Weighted Round Robin: Not Supported 00:17:34.292 Vendor Specific: Not Supported 00:17:34.292 Reset Timeout: 7500 ms 00:17:34.292 Doorbell Stride: 4 bytes 00:17:34.292 NVM Subsystem Reset: Not Supported 00:17:34.292 Command Sets Supported 00:17:34.292 NVM Command Set: Supported 00:17:34.292 Boot Partition: Not Supported 00:17:34.292 Memory Page Size Minimum: 4096 bytes 00:17:34.292 Memory Page Size Maximum: 4096 bytes 00:17:34.292 Persistent Memory Region: Not Supported 00:17:34.292 Optional Asynchronous Events Supported 00:17:34.292 Namespace Attribute Notices: Not Supported 00:17:34.292 Firmware Activation Notices: Not Supported 00:17:34.292 ANA Change Notices: Not Supported 00:17:34.292 PLE Aggregate Log Change Notices: Not Supported 00:17:34.292 LBA Status Info Alert Notices: Not Supported 00:17:34.292 EGE Aggregate Log Change Notices: Not Supported 00:17:34.292 Normal NVM Subsystem Shutdown event: Not Supported 00:17:34.292 Zone Descriptor Change Notices: Not Supported 00:17:34.292 Discovery Log Change Notices: Supported 00:17:34.292 Controller Attributes 00:17:34.292 128-bit Host Identifier: Not Supported 00:17:34.292 Non-Operational Permissive Mode: Not Supported 00:17:34.292 NVM Sets: Not Supported 00:17:34.292 Read Recovery Levels: Not Supported 00:17:34.292 Endurance Groups: Not Supported 00:17:34.292 Predictable Latency Mode: Not Supported 00:17:34.292 Traffic Based Keep ALive: Not Supported 00:17:34.292 Namespace Granularity: Not Supported 00:17:34.292 SQ Associations: Not Supported 00:17:34.292 UUID List: Not Supported 00:17:34.292 Multi-Domain Subsystem: Not Supported 00:17:34.292 Fixed Capacity Management: Not Supported 00:17:34.292 Variable Capacity Management: Not Supported 00:17:34.292 Delete Endurance Group: Not Supported 00:17:34.292 Delete NVM Set: Not Supported 00:17:34.292 Extended LBA Formats Supported: Not Supported 00:17:34.292 Flexible Data 
Placement Supported: Not Supported 00:17:34.292 00:17:34.292 Controller Memory Buffer Support 00:17:34.292 ================================ 00:17:34.292 Supported: No 00:17:34.292 00:17:34.292 Persistent Memory Region Support 00:17:34.292 ================================ 00:17:34.292 Supported: No 00:17:34.292 00:17:34.292 Admin Command Set Attributes 00:17:34.293 ============================ 00:17:34.293 Security Send/Receive: Not Supported 00:17:34.293 Format NVM: Not Supported 00:17:34.293 Firmware Activate/Download: Not Supported 00:17:34.293 Namespace Management: Not Supported 00:17:34.293 Device Self-Test: Not Supported 00:17:34.293 Directives: Not Supported 00:17:34.293 NVMe-MI: Not Supported 00:17:34.293 Virtualization Management: Not Supported 00:17:34.293 Doorbell Buffer Config: Not Supported 00:17:34.293 Get LBA Status Capability: Not Supported 00:17:34.293 Command & Feature Lockdown Capability: Not Supported 00:17:34.293 Abort Command Limit: 1 00:17:34.293 Async Event Request Limit: 1 00:17:34.293 Number of Firmware Slots: N/A 00:17:34.293 Firmware Slot 1 Read-Only: N/A 00:17:34.293 Firmware Activation Without Reset: N/A 00:17:34.293 Multiple Update Detection Support: N/A 00:17:34.293 Firmware Update Granularity: No Information Provided 00:17:34.293 Per-Namespace SMART Log: No 00:17:34.293 Asymmetric Namespace Access Log Page: Not Supported 00:17:34.293 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:34.293 Command Effects Log Page: Not Supported 00:17:34.293 Get Log Page Extended Data: Supported 00:17:34.293 Telemetry Log Pages: Not Supported 00:17:34.293 Persistent Event Log Pages: Not Supported 00:17:34.293 Supported Log Pages Log Page: May Support 00:17:34.293 Commands Supported & Effects Log Page: Not Supported 00:17:34.293 Feature Identifiers & Effects Log Page:May Support 00:17:34.293 NVMe-MI Commands & Effects Log Page: May Support 00:17:34.293 Data Area 4 for Telemetry Log: Not Supported 00:17:34.293 Error Log Page Entries Supported: 1 00:17:34.293 Keep Alive: Not Supported 00:17:34.293 00:17:34.293 NVM Command Set Attributes 00:17:34.293 ========================== 00:17:34.293 Submission Queue Entry Size 00:17:34.293 Max: 1 00:17:34.293 Min: 1 00:17:34.293 Completion Queue Entry Size 00:17:34.293 Max: 1 00:17:34.293 Min: 1 00:17:34.293 Number of Namespaces: 0 00:17:34.293 Compare Command: Not Supported 00:17:34.293 Write Uncorrectable Command: Not Supported 00:17:34.293 Dataset Management Command: Not Supported 00:17:34.293 Write Zeroes Command: Not Supported 00:17:34.293 Set Features Save Field: Not Supported 00:17:34.293 Reservations: Not Supported 00:17:34.293 Timestamp: Not Supported 00:17:34.293 Copy: Not Supported 00:17:34.293 Volatile Write Cache: Not Present 00:17:34.293 Atomic Write Unit (Normal): 1 00:17:34.293 Atomic Write Unit (PFail): 1 00:17:34.293 Atomic Compare & Write Unit: 1 00:17:34.293 Fused Compare & Write: Not Supported 00:17:34.293 Scatter-Gather List 00:17:34.293 SGL Command Set: Supported 00:17:34.293 SGL Keyed: Not Supported 00:17:34.293 SGL Bit Bucket Descriptor: Not Supported 00:17:34.293 SGL Metadata Pointer: Not Supported 00:17:34.293 Oversized SGL: Not Supported 00:17:34.293 SGL Metadata Address: Not Supported 00:17:34.293 SGL Offset: Supported 00:17:34.293 Transport SGL Data Block: Not Supported 00:17:34.293 Replay Protected Memory Block: Not Supported 00:17:34.293 00:17:34.293 Firmware Slot Information 00:17:34.293 ========================= 00:17:34.293 Active slot: 0 00:17:34.293 00:17:34.293 00:17:34.293 Error Log 
00:17:34.293 ========= 00:17:34.293 00:17:34.293 Active Namespaces 00:17:34.293 ================= 00:17:34.293 Discovery Log Page 00:17:34.293 ================== 00:17:34.293 Generation Counter: 2 00:17:34.293 Number of Records: 2 00:17:34.293 Record Format: 0 00:17:34.293 00:17:34.293 Discovery Log Entry 0 00:17:34.293 ---------------------- 00:17:34.293 Transport Type: 3 (TCP) 00:17:34.293 Address Family: 1 (IPv4) 00:17:34.293 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:34.293 Entry Flags: 00:17:34.293 Duplicate Returned Information: 0 00:17:34.293 Explicit Persistent Connection Support for Discovery: 0 00:17:34.293 Transport Requirements: 00:17:34.293 Secure Channel: Not Specified 00:17:34.293 Port ID: 1 (0x0001) 00:17:34.293 Controller ID: 65535 (0xffff) 00:17:34.293 Admin Max SQ Size: 32 00:17:34.293 Transport Service Identifier: 4420 00:17:34.293 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:34.293 Transport Address: 10.0.0.1 00:17:34.293 Discovery Log Entry 1 00:17:34.293 ---------------------- 00:17:34.293 Transport Type: 3 (TCP) 00:17:34.293 Address Family: 1 (IPv4) 00:17:34.293 Subsystem Type: 2 (NVM Subsystem) 00:17:34.293 Entry Flags: 00:17:34.293 Duplicate Returned Information: 0 00:17:34.293 Explicit Persistent Connection Support for Discovery: 0 00:17:34.293 Transport Requirements: 00:17:34.293 Secure Channel: Not Specified 00:17:34.293 Port ID: 1 (0x0001) 00:17:34.293 Controller ID: 65535 (0xffff) 00:17:34.293 Admin Max SQ Size: 32 00:17:34.293 Transport Service Identifier: 4420 00:17:34.293 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:34.293 Transport Address: 10.0.0.1 00:17:34.293 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:34.552 get_feature(0x01) failed 00:17:34.552 get_feature(0x02) failed 00:17:34.552 get_feature(0x04) failed 00:17:34.552 ===================================================== 00:17:34.552 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:34.552 ===================================================== 00:17:34.552 Controller Capabilities/Features 00:17:34.552 ================================ 00:17:34.552 Vendor ID: 0000 00:17:34.552 Subsystem Vendor ID: 0000 00:17:34.552 Serial Number: e7ee22b72d8711f6ddc6 00:17:34.552 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:34.552 Firmware Version: 6.8.9-20 00:17:34.552 Recommended Arb Burst: 6 00:17:34.552 IEEE OUI Identifier: 00 00 00 00:17:34.552 Multi-path I/O 00:17:34.552 May have multiple subsystem ports: Yes 00:17:34.552 May have multiple controllers: Yes 00:17:34.552 Associated with SR-IOV VF: No 00:17:34.552 Max Data Transfer Size: Unlimited 00:17:34.552 Max Number of Namespaces: 1024 00:17:34.552 Max Number of I/O Queues: 128 00:17:34.552 NVMe Specification Version (VS): 1.3 00:17:34.552 NVMe Specification Version (Identify): 1.3 00:17:34.552 Maximum Queue Entries: 1024 00:17:34.552 Contiguous Queues Required: No 00:17:34.552 Arbitration Mechanisms Supported 00:17:34.552 Weighted Round Robin: Not Supported 00:17:34.552 Vendor Specific: Not Supported 00:17:34.552 Reset Timeout: 7500 ms 00:17:34.552 Doorbell Stride: 4 bytes 00:17:34.552 NVM Subsystem Reset: Not Supported 00:17:34.552 Command Sets Supported 00:17:34.552 NVM Command Set: Supported 00:17:34.552 Boot Partition: Not Supported 00:17:34.552 Memory 
Page Size Minimum: 4096 bytes 00:17:34.552 Memory Page Size Maximum: 4096 bytes 00:17:34.552 Persistent Memory Region: Not Supported 00:17:34.552 Optional Asynchronous Events Supported 00:17:34.552 Namespace Attribute Notices: Supported 00:17:34.552 Firmware Activation Notices: Not Supported 00:17:34.553 ANA Change Notices: Supported 00:17:34.553 PLE Aggregate Log Change Notices: Not Supported 00:17:34.553 LBA Status Info Alert Notices: Not Supported 00:17:34.553 EGE Aggregate Log Change Notices: Not Supported 00:17:34.553 Normal NVM Subsystem Shutdown event: Not Supported 00:17:34.553 Zone Descriptor Change Notices: Not Supported 00:17:34.553 Discovery Log Change Notices: Not Supported 00:17:34.553 Controller Attributes 00:17:34.553 128-bit Host Identifier: Supported 00:17:34.553 Non-Operational Permissive Mode: Not Supported 00:17:34.553 NVM Sets: Not Supported 00:17:34.553 Read Recovery Levels: Not Supported 00:17:34.553 Endurance Groups: Not Supported 00:17:34.553 Predictable Latency Mode: Not Supported 00:17:34.553 Traffic Based Keep ALive: Supported 00:17:34.553 Namespace Granularity: Not Supported 00:17:34.553 SQ Associations: Not Supported 00:17:34.553 UUID List: Not Supported 00:17:34.553 Multi-Domain Subsystem: Not Supported 00:17:34.553 Fixed Capacity Management: Not Supported 00:17:34.553 Variable Capacity Management: Not Supported 00:17:34.553 Delete Endurance Group: Not Supported 00:17:34.553 Delete NVM Set: Not Supported 00:17:34.553 Extended LBA Formats Supported: Not Supported 00:17:34.553 Flexible Data Placement Supported: Not Supported 00:17:34.553 00:17:34.553 Controller Memory Buffer Support 00:17:34.553 ================================ 00:17:34.553 Supported: No 00:17:34.553 00:17:34.553 Persistent Memory Region Support 00:17:34.553 ================================ 00:17:34.553 Supported: No 00:17:34.553 00:17:34.553 Admin Command Set Attributes 00:17:34.553 ============================ 00:17:34.553 Security Send/Receive: Not Supported 00:17:34.553 Format NVM: Not Supported 00:17:34.553 Firmware Activate/Download: Not Supported 00:17:34.553 Namespace Management: Not Supported 00:17:34.553 Device Self-Test: Not Supported 00:17:34.553 Directives: Not Supported 00:17:34.553 NVMe-MI: Not Supported 00:17:34.553 Virtualization Management: Not Supported 00:17:34.553 Doorbell Buffer Config: Not Supported 00:17:34.553 Get LBA Status Capability: Not Supported 00:17:34.553 Command & Feature Lockdown Capability: Not Supported 00:17:34.553 Abort Command Limit: 4 00:17:34.553 Async Event Request Limit: 4 00:17:34.553 Number of Firmware Slots: N/A 00:17:34.553 Firmware Slot 1 Read-Only: N/A 00:17:34.553 Firmware Activation Without Reset: N/A 00:17:34.553 Multiple Update Detection Support: N/A 00:17:34.553 Firmware Update Granularity: No Information Provided 00:17:34.553 Per-Namespace SMART Log: Yes 00:17:34.553 Asymmetric Namespace Access Log Page: Supported 00:17:34.553 ANA Transition Time : 10 sec 00:17:34.553 00:17:34.553 Asymmetric Namespace Access Capabilities 00:17:34.553 ANA Optimized State : Supported 00:17:34.553 ANA Non-Optimized State : Supported 00:17:34.553 ANA Inaccessible State : Supported 00:17:34.553 ANA Persistent Loss State : Supported 00:17:34.553 ANA Change State : Supported 00:17:34.553 ANAGRPID is not changed : No 00:17:34.553 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:34.553 00:17:34.553 ANA Group Identifier Maximum : 128 00:17:34.553 Number of ANA Group Identifiers : 128 00:17:34.553 Max Number of Allowed Namespaces : 1024 00:17:34.553 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:34.553 Command Effects Log Page: Supported 00:17:34.553 Get Log Page Extended Data: Supported 00:17:34.553 Telemetry Log Pages: Not Supported 00:17:34.553 Persistent Event Log Pages: Not Supported 00:17:34.553 Supported Log Pages Log Page: May Support 00:17:34.553 Commands Supported & Effects Log Page: Not Supported 00:17:34.553 Feature Identifiers & Effects Log Page:May Support 00:17:34.553 NVMe-MI Commands & Effects Log Page: May Support 00:17:34.553 Data Area 4 for Telemetry Log: Not Supported 00:17:34.553 Error Log Page Entries Supported: 128 00:17:34.553 Keep Alive: Supported 00:17:34.553 Keep Alive Granularity: 1000 ms 00:17:34.553 00:17:34.553 NVM Command Set Attributes 00:17:34.553 ========================== 00:17:34.553 Submission Queue Entry Size 00:17:34.553 Max: 64 00:17:34.553 Min: 64 00:17:34.553 Completion Queue Entry Size 00:17:34.553 Max: 16 00:17:34.553 Min: 16 00:17:34.553 Number of Namespaces: 1024 00:17:34.553 Compare Command: Not Supported 00:17:34.553 Write Uncorrectable Command: Not Supported 00:17:34.553 Dataset Management Command: Supported 00:17:34.553 Write Zeroes Command: Supported 00:17:34.553 Set Features Save Field: Not Supported 00:17:34.553 Reservations: Not Supported 00:17:34.553 Timestamp: Not Supported 00:17:34.553 Copy: Not Supported 00:17:34.553 Volatile Write Cache: Present 00:17:34.553 Atomic Write Unit (Normal): 1 00:17:34.553 Atomic Write Unit (PFail): 1 00:17:34.553 Atomic Compare & Write Unit: 1 00:17:34.553 Fused Compare & Write: Not Supported 00:17:34.553 Scatter-Gather List 00:17:34.553 SGL Command Set: Supported 00:17:34.553 SGL Keyed: Not Supported 00:17:34.553 SGL Bit Bucket Descriptor: Not Supported 00:17:34.553 SGL Metadata Pointer: Not Supported 00:17:34.553 Oversized SGL: Not Supported 00:17:34.553 SGL Metadata Address: Not Supported 00:17:34.553 SGL Offset: Supported 00:17:34.553 Transport SGL Data Block: Not Supported 00:17:34.553 Replay Protected Memory Block: Not Supported 00:17:34.553 00:17:34.553 Firmware Slot Information 00:17:34.553 ========================= 00:17:34.553 Active slot: 0 00:17:34.553 00:17:34.553 Asymmetric Namespace Access 00:17:34.553 =========================== 00:17:34.553 Change Count : 0 00:17:34.553 Number of ANA Group Descriptors : 1 00:17:34.553 ANA Group Descriptor : 0 00:17:34.553 ANA Group ID : 1 00:17:34.553 Number of NSID Values : 1 00:17:34.553 Change Count : 0 00:17:34.553 ANA State : 1 00:17:34.553 Namespace Identifier : 1 00:17:34.553 00:17:34.553 Commands Supported and Effects 00:17:34.553 ============================== 00:17:34.553 Admin Commands 00:17:34.553 -------------- 00:17:34.553 Get Log Page (02h): Supported 00:17:34.553 Identify (06h): Supported 00:17:34.553 Abort (08h): Supported 00:17:34.553 Set Features (09h): Supported 00:17:34.553 Get Features (0Ah): Supported 00:17:34.553 Asynchronous Event Request (0Ch): Supported 00:17:34.553 Keep Alive (18h): Supported 00:17:34.553 I/O Commands 00:17:34.553 ------------ 00:17:34.553 Flush (00h): Supported 00:17:34.553 Write (01h): Supported LBA-Change 00:17:34.553 Read (02h): Supported 00:17:34.553 Write Zeroes (08h): Supported LBA-Change 00:17:34.553 Dataset Management (09h): Supported 00:17:34.553 00:17:34.553 Error Log 00:17:34.553 ========= 00:17:34.553 Entry: 0 00:17:34.553 Error Count: 0x3 00:17:34.553 Submission Queue Id: 0x0 00:17:34.553 Command Id: 0x5 00:17:34.553 Phase Bit: 0 00:17:34.553 Status Code: 0x2 00:17:34.553 Status Code Type: 0x0 00:17:34.553 Do Not Retry: 1 00:17:34.553 Error 
Location: 0x28 00:17:34.553 LBA: 0x0 00:17:34.553 Namespace: 0x0 00:17:34.553 Vendor Log Page: 0x0 00:17:34.553 ----------- 00:17:34.553 Entry: 1 00:17:34.553 Error Count: 0x2 00:17:34.553 Submission Queue Id: 0x0 00:17:34.553 Command Id: 0x5 00:17:34.553 Phase Bit: 0 00:17:34.553 Status Code: 0x2 00:17:34.553 Status Code Type: 0x0 00:17:34.553 Do Not Retry: 1 00:17:34.553 Error Location: 0x28 00:17:34.553 LBA: 0x0 00:17:34.553 Namespace: 0x0 00:17:34.553 Vendor Log Page: 0x0 00:17:34.553 ----------- 00:17:34.553 Entry: 2 00:17:34.553 Error Count: 0x1 00:17:34.553 Submission Queue Id: 0x0 00:17:34.553 Command Id: 0x4 00:17:34.553 Phase Bit: 0 00:17:34.553 Status Code: 0x2 00:17:34.553 Status Code Type: 0x0 00:17:34.553 Do Not Retry: 1 00:17:34.553 Error Location: 0x28 00:17:34.553 LBA: 0x0 00:17:34.553 Namespace: 0x0 00:17:34.553 Vendor Log Page: 0x0 00:17:34.553 00:17:34.553 Number of Queues 00:17:34.553 ================ 00:17:34.553 Number of I/O Submission Queues: 128 00:17:34.553 Number of I/O Completion Queues: 128 00:17:34.553 00:17:34.553 ZNS Specific Controller Data 00:17:34.553 ============================ 00:17:34.553 Zone Append Size Limit: 0 00:17:34.553 00:17:34.553 00:17:34.553 Active Namespaces 00:17:34.553 ================= 00:17:34.553 get_feature(0x05) failed 00:17:34.553 Namespace ID:1 00:17:34.553 Command Set Identifier: NVM (00h) 00:17:34.553 Deallocate: Supported 00:17:34.553 Deallocated/Unwritten Error: Not Supported 00:17:34.553 Deallocated Read Value: Unknown 00:17:34.553 Deallocate in Write Zeroes: Not Supported 00:17:34.553 Deallocated Guard Field: 0xFFFF 00:17:34.553 Flush: Supported 00:17:34.553 Reservation: Not Supported 00:17:34.553 Namespace Sharing Capabilities: Multiple Controllers 00:17:34.553 Size (in LBAs): 1310720 (5GiB) 00:17:34.553 Capacity (in LBAs): 1310720 (5GiB) 00:17:34.554 Utilization (in LBAs): 1310720 (5GiB) 00:17:34.554 UUID: f1ac17b7-a477-4dc6-b057-cc68bab87e0e 00:17:34.554 Thin Provisioning: Not Supported 00:17:34.554 Per-NS Atomic Units: Yes 00:17:34.554 Atomic Boundary Size (Normal): 0 00:17:34.554 Atomic Boundary Size (PFail): 0 00:17:34.554 Atomic Boundary Offset: 0 00:17:34.554 NGUID/EUI64 Never Reused: No 00:17:34.554 ANA group ID: 1 00:17:34.554 Namespace Write Protected: No 00:17:34.554 Number of LBA Formats: 1 00:17:34.554 Current LBA Format: LBA Format #00 00:17:34.554 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:34.554 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.554 rmmod nvme_tcp 00:17:34.554 rmmod nvme_fabrics 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:34.554 13:58:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.554 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:34.813 13:58:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.749 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.749 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.749 00:17:35.749 real 0m3.287s 00:17:35.749 user 0m1.150s 00:17:35.749 sys 0m1.476s 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.749 ************************************ 00:17:35.749 END TEST nvmf_identify_kernel_target 00:17:35.749 ************************************ 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.749 ************************************ 00:17:35.749 START TEST nvmf_auth_host 00:17:35.749 ************************************ 00:17:35.749 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:36.008 * Looking for test storage... 
00:17:36.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:36.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.008 --rc genhtml_branch_coverage=1 00:17:36.008 --rc genhtml_function_coverage=1 00:17:36.008 --rc genhtml_legend=1 00:17:36.008 --rc geninfo_all_blocks=1 00:17:36.008 --rc geninfo_unexecuted_blocks=1 00:17:36.008 00:17:36.008 ' 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:36.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.008 --rc genhtml_branch_coverage=1 00:17:36.008 --rc genhtml_function_coverage=1 00:17:36.008 --rc genhtml_legend=1 00:17:36.008 --rc geninfo_all_blocks=1 00:17:36.008 --rc geninfo_unexecuted_blocks=1 00:17:36.008 00:17:36.008 ' 00:17:36.008 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:36.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.009 --rc genhtml_branch_coverage=1 00:17:36.009 --rc genhtml_function_coverage=1 00:17:36.009 --rc genhtml_legend=1 00:17:36.009 --rc geninfo_all_blocks=1 00:17:36.009 --rc geninfo_unexecuted_blocks=1 00:17:36.009 00:17:36.009 ' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.009 --rc genhtml_branch_coverage=1 00:17:36.009 --rc genhtml_function_coverage=1 00:17:36.009 --rc genhtml_legend=1 00:17:36.009 --rc geninfo_all_blocks=1 00:17:36.009 --rc geninfo_unexecuted_blocks=1 00:17:36.009 00:17:36.009 ' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:36.009 Cannot find device "nvmf_init_br" 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:36.009 Cannot find device "nvmf_init_br2" 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:36.009 Cannot find device "nvmf_tgt_br" 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.009 Cannot find device "nvmf_tgt_br2" 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:36.009 13:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:36.009 Cannot find device "nvmf_init_br" 00:17:36.009 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:36.009 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:36.009 Cannot find device "nvmf_init_br2" 00:17:36.009 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:36.010 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:36.010 Cannot find device "nvmf_tgt_br" 00:17:36.010 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:36.010 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:36.010 Cannot find device "nvmf_tgt_br2" 00:17:36.010 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:36.010 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:36.268 Cannot find device "nvmf_br" 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:36.268 Cannot find device "nvmf_init_if" 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:36.268 Cannot find device "nvmf_init_if2" 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.268 13:58:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.268 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
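Condensed, the topology that nvmf_veth_init is assembling in the trace above is two initiator-side veth pairs (nvmf_init_if/nvmf_init_if2 holding 10.0.0.1 and 10.0.0.2) plus two target-side pairs whose far ends (nvmf_tgt_if/nvmf_tgt_if2 holding 10.0.0.3 and 10.0.0.4) live in the nvmf_tgt_ns_spdk namespace, with all four bridge-side peers joined through nvmf_br. The following is a rough stand-alone recreation for reference only; interface names and addresses are taken from the log, but this sketch is not the harness's exact nvmf_veth_init:

    # build the namespace, veth pairs and bridge used by the TCP tests
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done

The iptables ACCEPT rules for TCP port 4420 and the bridge FORWARD rule that follow in the log complete the setup before the reachability pings.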
00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:36.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:17:36.269 00:17:36.269 --- 10.0.0.3 ping statistics --- 00:17:36.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.269 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:36.269 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:36.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:36.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:17:36.527 00:17:36.528 --- 10.0.0.4 ping statistics --- 00:17:36.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.528 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:36.528 00:17:36.528 --- 10.0.0.1 ping statistics --- 00:17:36.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.528 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:36.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:36.528 00:17:36.528 --- 10.0.0.2 ping statistics --- 00:17:36.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.528 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=79655 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 79655 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79655 ']' 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
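At this point the target application has been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth) and waitforlisten blocks until the SPDK RPC socket accepts connections. A rough stand-alone approximation of that wait is shown below; the function name and retry counts are illustrative, not the harness's implementation, and only the default socket path /var/tmp/spdk.sock comes from the log:

    # poll until something is listening on the SPDK RPC UNIX socket
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            python3 -c '
    import socket, sys
    s = socket.socket(socket.AF_UNIX)
    s.settimeout(1)
    try:
        s.connect(sys.argv[1])
    except OSError:
        sys.exit(1)
    ' "$sock" && return 0
            sleep 0.5
        done
        return 1
    }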
00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.528 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.786 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.786 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:36.786 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:36.787 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:37.045 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.045 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=37a8ba7b5b3659601a675f58498e747a 00:17:37.045 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:37.045 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cLS 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 37a8ba7b5b3659601a675f58498e747a 0 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 37a8ba7b5b3659601a675f58498e747a 0 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=37a8ba7b5b3659601a675f58498e747a 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cLS 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cLS 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cLS 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.046 13:58:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2d584ebfca2eefa3b4d1ad6fc5299fc5f1b539a85e4291013789695b41c81d9c 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7zg 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2d584ebfca2eefa3b4d1ad6fc5299fc5f1b539a85e4291013789695b41c81d9c 3 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2d584ebfca2eefa3b4d1ad6fc5299fc5f1b539a85e4291013789695b41c81d9c 3 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2d584ebfca2eefa3b4d1ad6fc5299fc5f1b539a85e4291013789695b41c81d9c 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7zg 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7zg 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7zg 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bd510df98deb4b695838496120c97b509fcf420706b88a59 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.f28 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bd510df98deb4b695838496120c97b509fcf420706b88a59 0 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bd510df98deb4b695838496120c97b509fcf420706b88a59 0 
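Each gen_dhchap_key call above draws random bytes with xxd and hands the resulting hex string to an inline python snippet that emits the DHHC-1 secret representation. As a hedged sketch of what that representation amounts to (not the harness's exact format_key): the secret string is treated as ASCII bytes, a CRC-32 of it is appended, the result is base64-encoded and prefixed with the hash identifier (0 = unhashed, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512); the CRC byte order is assumed little-endian here:

    # illustrative re-implementation of the DHHC-1 secret encoding
    format_dhchap_key_sketch() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    secret = sys.argv[1].encode()           # the hex string is used as ASCII bytes
    crc = zlib.crc32(secret) & 0xffffffff   # CRC-32 of the secret, assumed little-endian on the wire
    blob = base64.b64encode(secret + struct.pack('<I', crc)).decode()
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{blob}:")
    PY
    }

Running this on the 48-character secret generated above (bd510df98deb4b695838496120c97b509fcf420706b88a59) with digest 0 should yield a key of the DHHC-1:00:...: form that appears later in the trace when the key files are loaded.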
00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bd510df98deb4b695838496120c97b509fcf420706b88a59 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:37.046 13:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.f28 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.f28 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.f28 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c6aa523188d53271d6c6c1b09d0cdc1e30917c10add9147 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2ev 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c6aa523188d53271d6c6c1b09d0cdc1e30917c10add9147 2 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c6aa523188d53271d6c6c1b09d0cdc1e30917c10add9147 2 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c6aa523188d53271d6c6c1b09d0cdc1e30917c10add9147 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.046 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2ev 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2ev 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2ev 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.305 13:58:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8e4a61dbb120a928d663f8282887a0a 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pRn 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8e4a61dbb120a928d663f8282887a0a 1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8e4a61dbb120a928d663f8282887a0a 1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8e4a61dbb120a928d663f8282887a0a 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pRn 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pRn 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pRn 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c9a7575216d587f1c4459ff7d4c6f485 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.J1A 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c9a7575216d587f1c4459ff7d4c6f485 1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c9a7575216d587f1c4459ff7d4c6f485 1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=c9a7575216d587f1c4459ff7d4c6f485 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.J1A 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.J1A 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.J1A 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=17790d8331305c1d8e0fb0894b93690090b0875bcd08a6ac 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y7I 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 17790d8331305c1d8e0fb0894b93690090b0875bcd08a6ac 2 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 17790d8331305c1d8e0fb0894b93690090b0875bcd08a6ac 2 00:17:37.305 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=17790d8331305c1d8e0fb0894b93690090b0875bcd08a6ac 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y7I 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y7I 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Y7I 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:37.306 13:58:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ab66ae85e9011b1363bfbb6e9190785 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.q6e 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ab66ae85e9011b1363bfbb6e9190785 0 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ab66ae85e9011b1363bfbb6e9190785 0 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ab66ae85e9011b1363bfbb6e9190785 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:37.306 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.q6e 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.q6e 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.q6e 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e52ca4ac9c1c737b5c6249ac7c8e1a2f5cbb5266018b704a3bb70c230df9e83 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jxo 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e52ca4ac9c1c737b5c6249ac7c8e1a2f5cbb5266018b704a3bb70c230df9e83 3 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e52ca4ac9c1c737b5c6249ac7c8e1a2f5cbb5266018b704a3bb70c230df9e83 3 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e52ca4ac9c1c737b5c6249ac7c8e1a2f5cbb5266018b704a3bb70c230df9e83 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jxo 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jxo 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Jxo 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 79655 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79655 ']' 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.564 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.565 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cLS 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7zg ]] 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7zg 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.f28 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.823 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2ev ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2ev 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pRn 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.J1A ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J1A 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Y7I 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.q6e ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.q6e 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Jxo 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.824 13:58:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:37.824 13:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:38.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.391 Waiting for block devices as requested 00:17:38.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:38.958 No valid GPT data, bailing 00:17:38.958 13:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:39.217 No valid GPT data, bailing 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:39.217 No valid GPT data, bailing 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:39.217 No valid GPT data, bailing 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:39.217 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -a 10.0.0.1 -t tcp -s 4420 00:17:39.477 00:17:39.477 Discovery Log Number of Records 2, Generation counter 2 00:17:39.477 =====Discovery Log Entry 0====== 00:17:39.477 trtype: tcp 00:17:39.477 adrfam: ipv4 00:17:39.477 subtype: current discovery subsystem 00:17:39.477 treq: not specified, sq flow control disable supported 00:17:39.477 portid: 1 00:17:39.477 trsvcid: 4420 00:17:39.477 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.477 traddr: 10.0.0.1 00:17:39.477 eflags: none 00:17:39.477 sectype: none 00:17:39.477 =====Discovery Log Entry 1====== 00:17:39.477 trtype: tcp 00:17:39.477 adrfam: ipv4 00:17:39.477 subtype: nvme subsystem 00:17:39.477 treq: not specified, sq flow control disable supported 00:17:39.477 portid: 1 00:17:39.477 trsvcid: 4420 00:17:39.477 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:39.477 traddr: 10.0.0.1 00:17:39.477 eflags: none 00:17:39.477 sectype: none 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.477 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 nvme0n1 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 nvme0n1 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.736 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.034 
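For reference, the target-side provisioning traced above (setup.sh reset, nvmf/common.sh@686-705 and host/auth.sh@36-51) reduces to a short configfs sequence. The sketch below is a reconstruction, not the scripts themselves: bash xtrace does not print redirections, so the attribute file names are the standard kernel nvmet configfs attributes and are assumed here; the NQNs, backing device, address and transport values are the ones visible in the log, and the DHHC-1 secrets are left as placeholder variables rather than copied out.

#!/usr/bin/env bash
# Sketch of the kernel nvmet target + DH-HMAC-CHAP host setup traced above.
# Assumed: standard nvmet configfs attribute names (redirection targets are not
# visible in the xtrace). KEY1/CKEY1 stand for the DHHC-1 strings written by
# nvmet_auth_set_key in the log.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"    # subsystem identity (attribute name assumed)
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"           # backing device selected by the GPT scan above
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"       # expose the subsystem; 'nvme discover' then reports 2 records

# host/auth.sh: restrict access to one host and install its DH-HMAC-CHAP key
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$KEY1"  > "$host/dhchap_key"        # DHHC-1:00:... value from the log
echo "$CKEY1" > "$host/dhchap_ctrl_key"   # DHHC-1:02:... controller key from the log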
13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.034 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.034 13:58:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.035 nvme0n1 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:40.035 13:58:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.035 13:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 nvme0n1 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 nvme0n1 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.294 
13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.294 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
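The rounds above and below all follow the same connect_authenticate pattern from host/auth.sh: re-key the nvmet host entry for the current (digest, dhgroup, keyid), constrain the SPDK initiator to that digest/dhgroup, attach with the matching key pair, confirm the controller came up, and detach. A condensed sketch of one round, assuming rpc_cmd wraps scripts/rpc.py against the running SPDK application and that key0/ckey0 name keyring entries registered earlier in the test; the RPC names and flags are taken verbatim from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed target of the rpc_cmd helper
digest=sha256 dhgroup=ffdhe2048 keyid=0

# Target side: nvmet_auth_set_key re-writes dhchap_hash/dhchap_dhgroup/dhchap_key
# for this round (see the configfs sketch above).

# Initiator side: only allow the digest/dhgroup under test, then authenticate.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The round passes if the controller appears, after which it is torn down again.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0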
00:17:40.553 nvme0n1 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.553 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.554 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.812 13:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.812 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.071 nvme0n1 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.071 13:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.071 13:58:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.071 13:58:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.071 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.330 nvme0n1 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.330 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.331 nvme0n1 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.331 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 nvme0n1 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.590 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.849 nvme0n1 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.849 13:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.416 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.417 13:58:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.417 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.675 nvme0n1 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.675 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:42.933 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.934 13:58:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.934 nvme0n1 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.934 13:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:43.192 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.193 nvme0n1 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.193 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.451 nvme0n1 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.451 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.710 13:58:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.710 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.968 nvme0n1 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.968 13:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.868 13:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 nvme0n1 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.126 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.127 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 nvme0n1 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.695 13:58:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.695 13:58:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.695 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.953 nvme0n1 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.953 13:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.212 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.212 
13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.470 nvme0n1 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.471 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.037 nvme0n1 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.037 13:58:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.037 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.038 13:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.604 nvme0n1 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.604 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.605 13:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.540 nvme0n1 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.540 
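Each pass of the loop traced above restricts the host's DH-HMAC-CHAP settings, attaches the controller with the key pair under test, checks that the controller came up, and detaches it. A minimal stand-alone sketch of the same initiator-side sequence for the iteration traced here (sha256 / ffdhe8192, key index 2), assuming an SPDK checkout with scripts/rpc.py (the test's rpc_cmd wrapper is not shown in this excerpt), a target already listening on 10.0.0.1:4420 as in the trace, and keyring entries key2/ckey2 registered earlier in the test:

# Limit the initiator to the digest/dhgroup pair under test.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# Attach with bidirectional authentication: key2 authenticates the host, ckey2 the controller.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The attach only completes if authentication succeeded; confirm, then clean up.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0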
13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.540 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.213 nvme0n1 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.213 13:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.213 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.214 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.779 nvme0n1 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.779 13:58:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:50.779 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.780 13:58:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.780 13:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 nvme0n1 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.346 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.604 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.605 nvme0n1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.605 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.863 nvme0n1 00:17:51.863 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.863 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:51.864 
13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.864 nvme0n1 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.864 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:52.126 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.126 
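The DHHC-1 strings configured on both sides of this trace are NVMe in-band authentication secrets in the standard textual form DHHC-1:tt:<base64-encoded secret plus CRC>:, where tt records the transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); all four variants appear among the keys above. As an illustration (not taken from this log; flags are from recent nvme-cli and assumed available), such a secret can be generated with:

# Produce a 32-byte secret transformed with SHA-256 for this host NQN; prints a DHHC-1:01:...: string.
nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0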
13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.127 nvme0n1 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.127 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.388 nvme0n1 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.388 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.646 nvme0n1 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.646 
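The nvmf/common.sh@769-783 entries that precede every attach are the trace of the get_main_ns_ip helper: it maps the transport in use to the name of the environment variable that holds the initiator-side address and then dereferences it, which is why the trace shows tcp, then NVMF_INITIATOR_IP, then 10.0.0.1. A sketch reconstructed from those traced steps (the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value; the real helper in nvmf/common.sh may differ in detail):

# Reconstruction of the traced helper, not the verbatim SPDK code.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Give up if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}

    # Indirect expansion: ${!ip} is the value of the variable whose name is stored in ip.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
# With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1, this prints 10.0.0.1.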
13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:52.646 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.647 13:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 nvme0n1 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:52.905 13:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.905 nvme0n1 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.905 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 nvme0n1 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.163 
13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
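For reference, every iteration in this part of the trace follows the same host-side RPC sequence: restrict the initiator's allowed digest and DH group with bdev_nvme_set_options, attach the controller with the per-key DH-HMAC-CHAP options, confirm via bdev_nvme_get_controllers that exactly one controller named nvme0 came up, then detach it before the next key index. The sketch below reproduces one such iteration in isolation; it assumes the target listener at 10.0.0.1:4420 and the key names key1/ckey1 were configured earlier in the test, and that rpc_cmd is the test suite's helper that forwards JSON-RPC calls to the running SPDK application.

# One host-side auth iteration, as seen in the trace above (sketch, not the
# verbatim test script). Assumes target setup and keys key1/ckey1 already exist.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Authentication succeeded if exactly one controller named nvme0 is reported.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0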
00:17:53.422 nvme0n1 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:53.422 13:58:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.680 nvme0n1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.680 13:58:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.680 13:58:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.680 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.938 nvme0n1 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.939 13:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.197 nvme0n1 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.197 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.456 nvme0n1 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.456 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.715 nvme0n1 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.715 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.973 13:58:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.973 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.974 13:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 nvme0n1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.233 13:58:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.233 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.549 nvme0n1 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.549 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.807 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.808 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.066 nvme0n1 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.066 13:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.066 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.632 nvme0n1 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.632 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.633 13:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.633 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.892 nvme0n1 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.892 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.149 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.150 13:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.729 nvme0n1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.729 13:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.295 nvme0n1 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.295 13:58:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.295 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.296 13:58:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.296 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.861 nvme0n1 00:17:58.861 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.861 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.861 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.861 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.861 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.119 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:59.120 13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.120 
13:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.686 nvme0n1 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.686 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.687 13:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.254 nvme0n1 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.254 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:00.512 13:58:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.512 13:58:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 nvme0n1 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:00.513 13:58:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.513 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 nvme0n1 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 nvme0n1 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.771 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.030 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.031 nvme0n1 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.031 13:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.031 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.290 nvme0n1 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.290 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.549 nvme0n1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.808 nvme0n1 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:01.808 
13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.808 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.809 nvme0n1 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.809 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.068 
13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.068 nvme0n1 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.068 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.326 nvme0n1 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.326 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.327 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 nvme0n1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.586 
13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.586 13:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.586 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 nvme0n1 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:02.844 13:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:02.844 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.845 13:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.103 nvme0n1 00:18:03.103 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.103 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.103 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.103 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.103 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.104 13:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.104 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 nvme0n1 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.376 
13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
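Each pass traced above exercises the same DH-HMAC-CHAP authentication flow, iterating the sha512 digest over the ffdhe4096/ffdhe6144/ffdhe8192 DH groups and key IDs 0-4: nvmet_auth_set_key installs the DHHC-1 secret (and, when present, the controller secret) on the target side, and connect_authenticate then configures the host and attaches/detaches a controller with the matching key. The following is a minimal sketch of one iteration, reconstructed only from the rpc_cmd calls visible in this log; it assumes rpc_cmd wraps SPDK's RPC client and reuses the address, port, and NQNs shown above, and the target-side step is summarized rather than reproduced (its details are not shown in this excerpt).

# Target side: nvmet_auth_set_key selects 'hmac(sha512)' + the DH group and installs the
# DHHC-1 host/controller secrets for the subsystem (configuration details not shown here).
# Host side: one connect_authenticate iteration, e.g. keyid=2 with ffdhe4096.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2   # ckeyN is passed only when a controller key exists (omitted for keyid 4)
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # test expects "nvme0", i.e. authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next digest/dhgroup/keyid combination

The same sequence repeats below with only the --dhchap-dhgroups value and the key index changing, which is why the surrounding trace looks near-identical from iteration to iteration.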
00:18:03.648 nvme0n1 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.648 13:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.648 13:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.215 nvme0n1 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.215 13:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:04.215 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.216 13:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.216 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.474 nvme0n1 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:04.474 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.475 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.042 nvme0n1 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:05.042 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.043 13:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.301 nvme0n1 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.301 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.560 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.819 nvme0n1 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdhOGJhN2I1YjM2NTk2MDFhNjc1ZjU4NDk4ZTc0N2FjkNpP: 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ1ODRlYmZjYTJlZWZhM2I0ZDFhZDZmYzUyOTlmYzVmMWI1MzlhODVlNDI5MTAxMzc4OTY5NWI0MWM4MWQ5Y6oiMGg=: 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.819 13:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.819 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.820 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.820 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.820 13:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 nvme0n1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.754 13:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.754 13:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.321 nvme0n1 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.321 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.889 nvme0n1 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTc3OTBkODMzMTMwNWMxZDhlMGZiMDg5NGI5MzY5MDA5MGIwODc1YmNkMDhhNmFjP72lmQ==: 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFiNjZhZTg1ZTkwMTFiMTM2M2JmYmI2ZTkxOTA3ODXuzkH7: 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:07.889 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.147 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:08.147 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.147 13:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.714 nvme0n1 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.714 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU1MmNhNGFjOWMxYzczN2I1YzYyNDlhYzdjOGUxYTJmNWNiYjUyNjYwMThiNzA0YTNiYjcwYzIzMGRmOWU4M9/cwsU=: 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:08.715 13:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.715 13:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 nvme0n1 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 request: 00:18:09.283 { 00:18:09.283 "name": "nvme0", 00:18:09.283 "trtype": "tcp", 00:18:09.283 "traddr": "10.0.0.1", 00:18:09.283 "adrfam": "ipv4", 00:18:09.283 "trsvcid": "4420", 00:18:09.283 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:09.283 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:09.283 "prchk_reftag": false, 00:18:09.283 "prchk_guard": false, 00:18:09.283 "hdgst": false, 00:18:09.283 "ddgst": false, 00:18:09.283 "allow_unrecognized_csi": false, 00:18:09.283 "method": "bdev_nvme_attach_controller", 00:18:09.283 "req_id": 1 00:18:09.283 } 00:18:09.283 Got JSON-RPC error response 00:18:09.283 response: 00:18:09.283 { 00:18:09.283 "code": -5, 00:18:09.283 "message": "Input/output error" 00:18:09.283 } 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.283 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.542 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.543 request: 00:18:09.543 { 00:18:09.543 "name": "nvme0", 00:18:09.543 "trtype": "tcp", 00:18:09.543 "traddr": "10.0.0.1", 00:18:09.543 "adrfam": "ipv4", 00:18:09.543 "trsvcid": "4420", 00:18:09.543 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:09.543 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:09.543 "prchk_reftag": false, 00:18:09.543 "prchk_guard": false, 00:18:09.543 "hdgst": false, 00:18:09.543 "ddgst": false, 00:18:09.543 "dhchap_key": "key2", 00:18:09.543 "allow_unrecognized_csi": false, 00:18:09.543 "method": "bdev_nvme_attach_controller", 00:18:09.543 "req_id": 1 00:18:09.543 } 00:18:09.543 Got JSON-RPC error response 00:18:09.543 response: 00:18:09.543 { 00:18:09.543 "code": -5, 00:18:09.543 "message": "Input/output error" 00:18:09.543 } 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.543 13:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.543 request: 00:18:09.543 { 00:18:09.543 "name": "nvme0", 00:18:09.543 "trtype": "tcp", 00:18:09.543 "traddr": "10.0.0.1", 00:18:09.543 "adrfam": "ipv4", 00:18:09.543 "trsvcid": "4420", 
00:18:09.543 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:09.543 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:09.543 "prchk_reftag": false, 00:18:09.543 "prchk_guard": false, 00:18:09.543 "hdgst": false, 00:18:09.543 "ddgst": false, 00:18:09.543 "dhchap_key": "key1", 00:18:09.543 "dhchap_ctrlr_key": "ckey2", 00:18:09.543 "allow_unrecognized_csi": false, 00:18:09.543 "method": "bdev_nvme_attach_controller", 00:18:09.543 "req_id": 1 00:18:09.543 } 00:18:09.543 Got JSON-RPC error response 00:18:09.543 response: 00:18:09.543 { 00:18:09.543 "code": -5, 00:18:09.543 "message": "Input/output error" 00:18:09.543 } 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.543 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 nvme0n1 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 request: 00:18:09.804 { 00:18:09.804 "name": "nvme0", 00:18:09.804 "dhchap_key": "key1", 00:18:09.804 "dhchap_ctrlr_key": "ckey2", 00:18:09.804 "method": "bdev_nvme_set_keys", 00:18:09.804 "req_id": 1 00:18:09.804 } 00:18:09.804 Got JSON-RPC error response 00:18:09.804 response: 00:18:09.804 
{ 00:18:09.804 "code": -13, 00:18:09.804 "message": "Permission denied" 00:18:09.804 } 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:09.804 13:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ1MTBkZjk4ZGViNGI2OTU4Mzg0OTYxMjBjOTdiNTA5ZmNmNDIwNzA2Yjg4YTU5xjH5Lg==: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM2YWE1MjMxODhkNTMyNzFkNmM2YzFiMDlkMGNkYzFlMzA5MTdjMTBhZGQ5MTQ34U71oA==: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.185 nvme0n1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThlNGE2MWRiYjEyMGE5MjhkNjYzZjgyODI4ODdhMGGV+XdH: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: ]] 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhNzU3NTIxNmQ1ODdmMWM0NDU5ZmY3ZDRjNmY0ODX5t72B: 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:11.185 13:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.185 request: 00:18:11.185 { 00:18:11.185 "name": "nvme0", 00:18:11.185 "dhchap_key": "key2", 00:18:11.185 "dhchap_ctrlr_key": "ckey1", 00:18:11.185 "method": "bdev_nvme_set_keys", 00:18:11.185 "req_id": 1 00:18:11.185 } 00:18:11.185 Got JSON-RPC error response 00:18:11.185 response: 00:18:11.185 { 00:18:11.185 "code": -13, 00:18:11.185 "message": "Permission denied" 00:18:11.185 } 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:11.185 13:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:12.120 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.378 rmmod nvme_tcp 00:18:12.378 rmmod nvme_fabrics 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 79655 ']' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 79655 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 79655 ']' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 79655 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79655 00:18:12.378 killing process with pid 79655 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79655' 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 79655 00:18:12.378 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 79655 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:12.636 13:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.636 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:12.894 13:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:13.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.717 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:13.717 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.717 13:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cLS /tmp/spdk.key-null.f28 /tmp/spdk.key-sha256.pRn /tmp/spdk.key-sha384.Y7I /tmp/spdk.key-sha512.Jxo /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:13.717 13:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:13.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.975 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:13.975 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.234 00:18:14.234 real 0m38.307s 00:18:14.234 user 0m34.833s 00:18:14.234 sys 0m3.889s 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.234 ************************************ 00:18:14.234 END TEST nvmf_auth_host 00:18:14.234 ************************************ 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.234 ************************************ 00:18:14.234 START TEST nvmf_digest 00:18:14.234 ************************************ 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:14.234 * Looking for test storage... 
00:18:14.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:14.234 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:14.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.493 --rc genhtml_branch_coverage=1 00:18:14.493 --rc genhtml_function_coverage=1 00:18:14.493 --rc genhtml_legend=1 00:18:14.493 --rc geninfo_all_blocks=1 00:18:14.493 --rc geninfo_unexecuted_blocks=1 00:18:14.493 00:18:14.493 ' 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:14.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.493 --rc genhtml_branch_coverage=1 00:18:14.493 --rc genhtml_function_coverage=1 00:18:14.493 --rc genhtml_legend=1 00:18:14.493 --rc geninfo_all_blocks=1 00:18:14.493 --rc geninfo_unexecuted_blocks=1 00:18:14.493 00:18:14.493 ' 00:18:14.493 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:14.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.493 --rc genhtml_branch_coverage=1 00:18:14.493 --rc genhtml_function_coverage=1 00:18:14.493 --rc genhtml_legend=1 00:18:14.494 --rc geninfo_all_blocks=1 00:18:14.494 --rc geninfo_unexecuted_blocks=1 00:18:14.494 00:18:14.494 ' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.494 --rc genhtml_branch_coverage=1 00:18:14.494 --rc genhtml_function_coverage=1 00:18:14.494 --rc genhtml_legend=1 00:18:14.494 --rc geninfo_all_blocks=1 00:18:14.494 --rc geninfo_unexecuted_blocks=1 00:18:14.494 00:18:14.494 ' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.494 13:59:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.494 Cannot find device "nvmf_init_br" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.494 Cannot find device "nvmf_init_br2" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.494 Cannot find device "nvmf_tgt_br" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:14.494 Cannot find device "nvmf_tgt_br2" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.494 Cannot find device "nvmf_init_br" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.494 Cannot find device "nvmf_init_br2" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.494 Cannot find device "nvmf_tgt_br" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.494 Cannot find device "nvmf_tgt_br2" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.494 Cannot find device "nvmf_br" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.494 Cannot find device "nvmf_init_if" 00:18:14.494 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.495 Cannot find device "nvmf_init_if2" 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.495 13:59:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.495 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:14.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:14.753 00:18:14.753 --- 10.0.0.3 ping statistics --- 00:18:14.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.753 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:14.753 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.753 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:14.753 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:14.753 00:18:14.754 --- 10.0.0.4 ping statistics --- 00:18:14.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.754 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:18:14.754 00:18:14.754 --- 10.0.0.1 ping statistics --- 00:18:14.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.754 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:14.754 00:18:14.754 --- 10.0.0.2 ping statistics --- 00:18:14.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.754 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:14.754 ************************************ 00:18:14.754 START TEST nvmf_digest_clean 00:18:14.754 ************************************ 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
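For reference, the veth/namespace/bridge topology that nvmf_veth_init assembles in the trace above can be reproduced by hand with roughly the following commands. The interface names, the namespace name, and the 10.0.0.x/24 addresses are taken from the log; ordering, cleanup, and error handling are simplified, so treat this as a sketch rather than the harness's verbatim script:

  # four veth pairs: two initiator-side, two target-side (target ends move into a netns)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators 10.0.0.1-2 on the host, targets 10.0.0.3-4 inside the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and enslave the *_br peer ends to one bridge
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done
  # open TCP/4420 for the NVMe-oF listeners and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the pings above verify both directions: host -> netns and netns -> host
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1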
00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=81311 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 81311 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 81311 ']' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.754 13:59:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:14.754 [2024-12-11 13:59:07.782152] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:14.754 [2024-12-11 13:59:07.782284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.012 [2024-12-11 13:59:07.938251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.012 [2024-12-11 13:59:08.004927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.012 [2024-12-11 13:59:08.004989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.012 [2024-12-11 13:59:08.005003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.012 [2024-12-11 13:59:08.005014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.012 [2024-12-11 13:59:08.005023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.012 [2024-12-11 13:59:08.005487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 [2024-12-11 13:59:08.860315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.946 null0 00:18:15.946 [2024-12-11 13:59:08.913299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.946 [2024-12-11 13:59:08.937437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=81343 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 81343 /var/tmp/bperf.sock 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 81343 ']' 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.946 13:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:16.204 [2024-12-11 13:59:08.993499] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:16.204 [2024-12-11 13:59:08.993584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81343 ] 00:18:16.204 [2024-12-11 13:59:09.143645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.204 [2024-12-11 13:59:09.212151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.185 13:59:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.185 13:59:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:17.185 13:59:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:17.186 13:59:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:17.186 13:59:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:17.443 [2024-12-11 13:59:10.277403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.443 13:59:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.443 13:59:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.700 nvme0n1 00:18:17.700 13:59:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:17.700 13:59:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:17.959 Running I/O for 2 seconds... 
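The bdevperf control flow captured here, and repeated for each run_bperf invocation that follows, is: start bdevperf idle with -z and --wait-for-rpc on a private RPC socket, finish framework initialization over that socket, attach an NVMe-oF/TCP controller with data digest enabled via --ddgst, and only then trigger the timed workload through bdevperf.py. A condensed sketch using the socket and arguments from the log, with the repository prefix shortened to spdk/ for readability:

  spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the run, the test reads accel_get_stats over the same socket and filters the crc32c opcode with jq to confirm which accel module executed the digest work; with scan_dsa=false the expected module is software.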
00:18:19.827 14859.00 IOPS, 58.04 MiB/s [2024-12-11T13:59:12.874Z] 14986.00 IOPS, 58.54 MiB/s 00:18:19.827 Latency(us) 00:18:19.827 [2024-12-11T13:59:12.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:19.827 nvme0n1 : 2.01 14970.59 58.48 0.00 0.00 8543.65 8221.79 19303.33 00:18:19.827 [2024-12-11T13:59:12.874Z] =================================================================================================================== 00:18:19.827 [2024-12-11T13:59:12.874Z] Total : 14970.59 58.48 0.00 0.00 8543.65 8221.79 19303.33 00:18:19.827 { 00:18:19.827 "results": [ 00:18:19.827 { 00:18:19.827 "job": "nvme0n1", 00:18:19.827 "core_mask": "0x2", 00:18:19.827 "workload": "randread", 00:18:19.827 "status": "finished", 00:18:19.827 "queue_depth": 128, 00:18:19.827 "io_size": 4096, 00:18:19.827 "runtime": 2.010609, 00:18:19.827 "iops": 14970.588513231563, 00:18:19.827 "mibps": 58.478861379810795, 00:18:19.827 "io_failed": 0, 00:18:19.827 "io_timeout": 0, 00:18:19.827 "avg_latency_us": 8543.6458679553, 00:18:19.827 "min_latency_us": 8221.789090909091, 00:18:19.827 "max_latency_us": 19303.33090909091 00:18:19.827 } 00:18:19.827 ], 00:18:19.827 "core_count": 1 00:18:19.827 } 00:18:19.827 13:59:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:19.827 13:59:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:19.827 13:59:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:19.827 | select(.opcode=="crc32c") 00:18:19.827 | "\(.module_name) \(.executed)"' 00:18:19.827 13:59:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:19.827 13:59:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 81343 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 81343 ']' 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 81343 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81343 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:20.085 killing process with pid 81343 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81343' 00:18:20.085 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.085 00:18:20.085 Latency(us) 00:18:20.085 [2024-12-11T13:59:13.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.085 [2024-12-11T13:59:13.132Z] =================================================================================================================== 00:18:20.085 [2024-12-11T13:59:13.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 81343 00:18:20.085 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 81343 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=81409 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 81409 /var/tmp/bperf.sock 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 81409 ']' 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.343 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:20.343 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.343 Zero copy mechanism will not be used. 00:18:20.343 [2024-12-11 13:59:13.349048] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:20.343 [2024-12-11 13:59:13.349148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81409 ] 00:18:20.601 [2024-12-11 13:59:13.492105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.601 [2024-12-11 13:59:13.553504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.601 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.601 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:20.601 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:20.601 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:20.601 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:21.167 [2024-12-11 13:59:13.907673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.167 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.167 13:59:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.425 nvme0n1 00:18:21.425 13:59:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:21.425 13:59:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:21.425 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:21.425 Zero copy mechanism will not be used. 00:18:21.425 Running I/O for 2 seconds... 
00:18:23.734 7440.00 IOPS, 930.00 MiB/s [2024-12-11T13:59:16.781Z] 7536.00 IOPS, 942.00 MiB/s 00:18:23.734 Latency(us) 00:18:23.734 [2024-12-11T13:59:16.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:23.734 nvme0n1 : 2.00 7533.22 941.65 0.00 0.00 2120.66 1787.35 8340.95 00:18:23.734 [2024-12-11T13:59:16.781Z] =================================================================================================================== 00:18:23.734 [2024-12-11T13:59:16.781Z] Total : 7533.22 941.65 0.00 0.00 2120.66 1787.35 8340.95 00:18:23.734 { 00:18:23.734 "results": [ 00:18:23.734 { 00:18:23.734 "job": "nvme0n1", 00:18:23.734 "core_mask": "0x2", 00:18:23.734 "workload": "randread", 00:18:23.734 "status": "finished", 00:18:23.734 "queue_depth": 16, 00:18:23.734 "io_size": 131072, 00:18:23.734 "runtime": 2.002861, 00:18:23.734 "iops": 7533.223723463585, 00:18:23.734 "mibps": 941.6529654329481, 00:18:23.734 "io_failed": 0, 00:18:23.734 "io_timeout": 0, 00:18:23.734 "avg_latency_us": 2120.657306468717, 00:18:23.734 "min_latency_us": 1787.3454545454545, 00:18:23.734 "max_latency_us": 8340.945454545454 00:18:23.734 } 00:18:23.734 ], 00:18:23.734 "core_count": 1 00:18:23.734 } 00:18:23.734 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:23.734 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:23.734 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:23.734 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:23.734 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:23.734 | select(.opcode=="crc32c") 00:18:23.734 | "\(.module_name) \(.executed)"' 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 81409 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 81409 ']' 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 81409 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81409 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
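As a sanity check on these summary rows, the MiB/s column is simply IOPS times the IO size: 7533.22 IOPS at 131072 bytes is 7533.22 * 131072 / 1048576 ≈ 941.65 MiB/s, and the earlier 4096-byte run gives 14970.59 * 4096 / 1048576 ≈ 58.48 MiB/s, both matching the reported mibps fields. A throwaway one-liner to verify (assumes bc is installed):

  echo "7533.223723463585 * 131072 / 1048576" | bc -l   # prints ~941.6529..., the mibps value above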
00:18:24.002 killing process with pid 81409 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81409' 00:18:24.002 Received shutdown signal, test time was about 2.000000 seconds 00:18:24.002 00:18:24.002 Latency(us) 00:18:24.002 [2024-12-11T13:59:17.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.002 [2024-12-11T13:59:17.049Z] =================================================================================================================== 00:18:24.002 [2024-12-11T13:59:17.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 81409 00:18:24.002 13:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 81409 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=81456 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 81456 /var/tmp/bperf.sock 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 81456 ']' 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.002 13:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:24.260 [2024-12-11 13:59:17.058740] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:24.260 [2024-12-11 13:59:17.058845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81456 ] 00:18:24.260 [2024-12-11 13:59:17.210589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.260 [2024-12-11 13:59:17.268288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.194 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.194 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:25.194 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:25.194 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:25.194 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:25.452 [2024-12-11 13:59:18.388502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.452 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.452 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.016 nvme0n1 00:18:26.016 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:26.016 13:59:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:26.016 Running I/O for 2 seconds... 
00:18:28.324 16511.00 IOPS, 64.50 MiB/s [2024-12-11T13:59:21.371Z] 16383.50 IOPS, 64.00 MiB/s 00:18:28.324 Latency(us) 00:18:28.324 [2024-12-11T13:59:21.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.324 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.324 nvme0n1 : 2.00 16407.15 64.09 0.00 0.00 7794.53 5838.66 16205.27 00:18:28.324 [2024-12-11T13:59:21.371Z] =================================================================================================================== 00:18:28.324 [2024-12-11T13:59:21.371Z] Total : 16407.15 64.09 0.00 0.00 7794.53 5838.66 16205.27 00:18:28.324 { 00:18:28.324 "results": [ 00:18:28.324 { 00:18:28.324 "job": "nvme0n1", 00:18:28.324 "core_mask": "0x2", 00:18:28.324 "workload": "randwrite", 00:18:28.324 "status": "finished", 00:18:28.324 "queue_depth": 128, 00:18:28.324 "io_size": 4096, 00:18:28.324 "runtime": 2.004918, 00:18:28.324 "iops": 16407.154806331233, 00:18:28.324 "mibps": 64.09044846223138, 00:18:28.324 "io_failed": 0, 00:18:28.324 "io_timeout": 0, 00:18:28.324 "avg_latency_us": 7794.530485484116, 00:18:28.324 "min_latency_us": 5838.6618181818185, 00:18:28.324 "max_latency_us": 16205.265454545455 00:18:28.324 } 00:18:28.324 ], 00:18:28.324 "core_count": 1 00:18:28.324 } 00:18:28.324 13:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:28.324 13:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:28.324 13:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:28.324 13:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:28.324 | select(.opcode=="crc32c") 00:18:28.324 | "\(.module_name) \(.executed)"' 00:18:28.324 13:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 81456 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 81456 ']' 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 81456 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.324 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81456 00:18:28.324 killing process with pid 81456 00:18:28.324 Received shutdown signal, test time was about 2.000000 seconds 00:18:28.324 00:18:28.324 Latency(us) 00:18:28.324 [2024-12-11T13:59:21.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:28.325 [2024-12-11T13:59:21.372Z] =================================================================================================================== 00:18:28.325 [2024-12-11T13:59:21.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.325 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.325 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.325 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81456' 00:18:28.325 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 81456 00:18:28.325 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 81456 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=81523 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 81523 /var/tmp/bperf.sock 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 81523 ']' 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:28.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.583 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:28.583 [2024-12-11 13:59:21.533905] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:28.583 [2024-12-11 13:59:21.534018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81523 ] 00:18:28.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:28.583 Zero copy mechanism will not be used. 00:18:28.841 [2024-12-11 13:59:21.675546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.842 [2024-12-11 13:59:21.729116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.842 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.842 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:28.842 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:28.842 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:28.842 13:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:29.099 [2024-12-11 13:59:22.134849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.358 13:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.358 13:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.616 nvme0n1 00:18:29.616 13:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:29.616 13:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:29.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:29.616 Zero copy mechanism will not be used. 00:18:29.616 Running I/O for 2 seconds... 
00:18:31.926 6566.00 IOPS, 820.75 MiB/s [2024-12-11T13:59:24.973Z] 6548.50 IOPS, 818.56 MiB/s 00:18:31.926 Latency(us) 00:18:31.926 [2024-12-11T13:59:24.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:31.926 nvme0n1 : 2.00 6545.08 818.14 0.00 0.00 2438.64 1817.13 10783.65 00:18:31.926 [2024-12-11T13:59:24.973Z] =================================================================================================================== 00:18:31.926 [2024-12-11T13:59:24.973Z] Total : 6545.08 818.14 0.00 0.00 2438.64 1817.13 10783.65 00:18:31.926 { 00:18:31.926 "results": [ 00:18:31.926 { 00:18:31.926 "job": "nvme0n1", 00:18:31.926 "core_mask": "0x2", 00:18:31.926 "workload": "randwrite", 00:18:31.926 "status": "finished", 00:18:31.926 "queue_depth": 16, 00:18:31.926 "io_size": 131072, 00:18:31.926 "runtime": 2.003184, 00:18:31.926 "iops": 6545.080232270226, 00:18:31.926 "mibps": 818.1350290337782, 00:18:31.926 "io_failed": 0, 00:18:31.926 "io_timeout": 0, 00:18:31.926 "avg_latency_us": 2438.641859923312, 00:18:31.926 "min_latency_us": 1817.1345454545456, 00:18:31.926 "max_latency_us": 10783.65090909091 00:18:31.926 } 00:18:31.926 ], 00:18:31.926 "core_count": 1 00:18:31.926 } 00:18:31.926 13:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:31.926 13:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:31.926 13:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:31.926 13:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:31.926 | select(.opcode=="crc32c") 00:18:31.926 | "\(.module_name) \(.executed)"' 00:18:31.926 13:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 81523 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 81523 ']' 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 81523 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81523 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81523' 00:18:32.185 killing process with pid 81523 00:18:32.185 Received shutdown signal, test time was about 2.000000 seconds 00:18:32.185 00:18:32.185 Latency(us) 00:18:32.185 [2024-12-11T13:59:25.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.185 [2024-12-11T13:59:25.232Z] =================================================================================================================== 00:18:32.185 [2024-12-11T13:59:25.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 81523 00:18:32.185 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 81523 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 81311 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 81311 ']' 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 81311 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81311 00:18:32.443 killing process with pid 81311 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81311' 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 81311 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 81311 00:18:32.443 00:18:32.443 real 0m17.758s 00:18:32.443 user 0m34.697s 00:18:32.443 sys 0m4.576s 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.443 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.443 ************************************ 00:18:32.443 END TEST nvmf_digest_clean 00:18:32.443 ************************************ 00:18:32.701 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:32.701 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:32.702 ************************************ 00:18:32.702 START TEST nvmf_digest_error 00:18:32.702 ************************************ 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:18:32.702 13:59:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=81599 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 81599 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:32.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81599 ']' 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.702 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.702 [2024-12-11 13:59:25.588904] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:32.702 [2024-12-11 13:59:25.589016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.702 [2024-12-11 13:59:25.732820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.960 [2024-12-11 13:59:25.782887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.960 [2024-12-11 13:59:25.782951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.960 [2024-12-11 13:59:25.782978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.960 [2024-12-11 13:59:25.782985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.960 [2024-12-11 13:59:25.782993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.960 [2024-12-11 13:59:25.783448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.960 [2024-12-11 13:59:25.911914] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.960 13:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.960 [2024-12-11 13:59:25.975026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.219 null0 00:18:33.219 [2024-12-11 13:59:26.031461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.219 [2024-12-11 13:59:26.055587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81622 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81622 /var/tmp/bperf.sock 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:33.219 13:59:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81622 ']' 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:33.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.219 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.219 [2024-12-11 13:59:26.112993] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:33.219 [2024-12-11 13:59:26.113235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81622 ] 00:18:33.219 [2024-12-11 13:59:26.256363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.478 [2024-12-11 13:59:26.313739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.478 [2024-12-11 13:59:26.368410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.478 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.478 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:33.478 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.478 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.736 13:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:34.304 nvme0n1 00:18:34.304 13:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:34.304 13:59:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.304 13:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.304 13:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.304 13:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:34.304 13:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:34.304 Running I/O for 2 seconds... 00:18:34.304 [2024-12-11 13:59:27.219932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.304 [2024-12-11 13:59:27.219987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.304 [2024-12-11 13:59:27.220003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.304 [2024-12-11 13:59:27.237056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.304 [2024-12-11 13:59:27.237101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.237116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.254312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.254510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.254528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.271749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.271936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.272070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.289423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.289605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.289748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.306867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.307044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18345 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.307200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.324408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.324626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.324766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.305 [2024-12-11 13:59:27.341938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.305 [2024-12-11 13:59:27.342168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.305 [2024-12-11 13:59:27.342328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.562 [2024-12-11 13:59:27.359949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.562 [2024-12-11 13:59:27.360273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.562 [2024-12-11 13:59:27.360447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.562 [2024-12-11 13:59:27.378098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.562 [2024-12-11 13:59:27.378167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.562 [2024-12-11 13:59:27.378182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.562 [2024-12-11 13:59:27.395256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.562 [2024-12-11 13:59:27.395299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.562 [2024-12-11 13:59:27.395313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.412322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.412360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.412389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.429404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.429581] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.429599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.446551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.446590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.446604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.463337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.463524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.463541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.480676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.480741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.480755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.497944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.497982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.498011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.515314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.515476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.515495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.532996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.533046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.550476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.550516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.550546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.567062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.567285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.567303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.583799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.583839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.563 [2024-12-11 13:59:27.600616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.563 [2024-12-11 13:59:27.600655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.563 [2024-12-11 13:59:27.600684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.617957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.618011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.618032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.634829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.635015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.635033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.651530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.651575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.651605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.668026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.668062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.668091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.684687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.684732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.684762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.701016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.701052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.701081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.717747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.718077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.718098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.735763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.735857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.753226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.753276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.753307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.770325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.770365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.770378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.787463] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.787517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.787533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.804882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.821 [2024-12-11 13:59:27.804916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.821 [2024-12-11 13:59:27.804946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.821 [2024-12-11 13:59:27.822327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.822 [2024-12-11 13:59:27.822364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.822 [2024-12-11 13:59:27.822377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.822 [2024-12-11 13:59:27.839272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.822 [2024-12-11 13:59:27.839431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.822 [2024-12-11 13:59:27.839449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.822 [2024-12-11 13:59:27.856325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:34.822 [2024-12-11 13:59:27.856497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.822 [2024-12-11 13:59:27.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.874138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.874368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.874553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.891765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.891947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.892103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.909175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.909544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.909697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.926849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.927222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.927349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.944633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.944822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.944945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.962295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.962469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.962643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.979953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.980114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.980131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:27.997191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:27.997348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:27.997365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.014495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.014545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.014559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.031665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.031730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.031745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.048776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.048815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.048827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.066007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.066209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.066230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.083819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.083867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.083896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.100835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.100880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.100894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.081 [2024-12-11 13:59:28.117787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.081 [2024-12-11 13:59:28.117824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.081 [2024-12-11 13:59:28.117837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.134650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.134900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.134919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.152015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.152081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.152113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.170766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.170848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.170879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.187937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.188005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.188036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 14548.00 IOPS, 56.83 MiB/s [2024-12-11T13:59:28.389Z] [2024-12-11 13:59:28.206317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.206362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.206393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.222972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.223031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.223061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.239708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.239775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.239804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.256288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.256325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:5397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.256354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.272840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.272878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.272907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.289303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.289344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.289374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.312688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.312732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.312762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.329256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.329291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.329320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.345653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.345689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.345734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.362050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.362275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.362297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.342 [2024-12-11 13:59:28.378932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.342 [2024-12-11 13:59:28.379270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.342 [2024-12-11 13:59:28.379289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.396679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.396734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.396749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.413872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.413910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.413923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.431152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.431191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.431204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.448166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.448202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.448231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.465156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.465192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.465221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.481916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.482104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.482123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.499271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 
00:18:35.602 [2024-12-11 13:59:28.499309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.499322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.516683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.516750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.516764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.534082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.534275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.534292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.551459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.551496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.568741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.568778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.568791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.586108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.586147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.586183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.603035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.603075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.603089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.620119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.620280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.620297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.602 [2024-12-11 13:59:28.637270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.602 [2024-12-11 13:59:28.637314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.602 [2024-12-11 13:59:28.637344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.654330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.654498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.654515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.671654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.671692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.671736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.688641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.688843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.688860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.705997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.706034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.706064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.723315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.723470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.723487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:35.861 [2024-12-11 13:59:28.742949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.742992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.743006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.760188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.760242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.777106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.777308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.777325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.794517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.794557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.794570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.811875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.811911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.811940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.829171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.829363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.861 [2024-12-11 13:59:28.829380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.861 [2024-12-11 13:59:28.846106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.861 [2024-12-11 13:59:28.846147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.862 [2024-12-11 13:59:28.846176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.862 [2024-12-11 13:59:28.862623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.862 [2024-12-11 13:59:28.862663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.862 [2024-12-11 13:59:28.862692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.862 [2024-12-11 13:59:28.879255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.862 [2024-12-11 13:59:28.879409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.862 [2024-12-11 13:59:28.879426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.862 [2024-12-11 13:59:28.896004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:35.862 [2024-12-11 13:59:28.896205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.862 [2024-12-11 13:59:28.896416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.913105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.913311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.913452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.930086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.930297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.930421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.947329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.947504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.947627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.964634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.964855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.964991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.981703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.981915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:28.998916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:28.999130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:28.999254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:29.016296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:29.016488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:29.016610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:29.034017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:29.034214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:29.034232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:29.051612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:29.051654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.130 [2024-12-11 13:59:29.051684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.130 [2024-12-11 13:59:29.068896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.130 [2024-12-11 13:59:29.069050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.131 [2024-12-11 13:59:29.069066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.131 [2024-12-11 13:59:29.086405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.131 [2024-12-11 13:59:29.086443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:36.131 [2024-12-11 13:59:29.086457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.131 [2024-12-11 13:59:29.103620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.131 [2024-12-11 13:59:29.103657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.131 [2024-12-11 13:59:29.103686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.131 [2024-12-11 13:59:29.120385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.131 [2024-12-11 13:59:29.120422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.131 [2024-12-11 13:59:29.120451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.131 [2024-12-11 13:59:29.137544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.131 [2024-12-11 13:59:29.137591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.131 [2024-12-11 13:59:29.137604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.131 [2024-12-11 13:59:29.154718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.131 [2024-12-11 13:59:29.154771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.131 [2024-12-11 13:59:29.154801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.400 [2024-12-11 13:59:29.171859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.400 [2024-12-11 13:59:29.171897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.400 [2024-12-11 13:59:29.171911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.400 [2024-12-11 13:59:29.189137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1678950) 00:18:36.400 [2024-12-11 13:59:29.189338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.400 [2024-12-11 13:59:29.189354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.400 14674.50 IOPS, 57.32 MiB/s 00:18:36.400 Latency(us) 00:18:36.400 [2024-12-11T13:59:29.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.400 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:36.400 
nvme0n1 : 2.01 14693.18 57.40 0.00 0.00 8705.31 7983.48 31695.59 00:18:36.400 [2024-12-11T13:59:29.447Z] =================================================================================================================== 00:18:36.400 [2024-12-11T13:59:29.447Z] Total : 14693.18 57.40 0.00 0.00 8705.31 7983.48 31695.59 00:18:36.400 { 00:18:36.400 "results": [ 00:18:36.400 { 00:18:36.400 "job": "nvme0n1", 00:18:36.400 "core_mask": "0x2", 00:18:36.400 "workload": "randread", 00:18:36.400 "status": "finished", 00:18:36.400 "queue_depth": 128, 00:18:36.400 "io_size": 4096, 00:18:36.400 "runtime": 2.006169, 00:18:36.400 "iops": 14693.178889714674, 00:18:36.400 "mibps": 57.39523003794795, 00:18:36.400 "io_failed": 0, 00:18:36.400 "io_timeout": 0, 00:18:36.400 "avg_latency_us": 8705.305742350738, 00:18:36.400 "min_latency_us": 7983.476363636363, 00:18:36.400 "max_latency_us": 31695.592727272728 00:18:36.400 } 00:18:36.400 ], 00:18:36.400 "core_count": 1 00:18:36.400 } 00:18:36.400 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:36.400 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:36.400 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:36.401 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:36.401 | .driver_specific 00:18:36.401 | .nvme_error 00:18:36.401 | .status_code 00:18:36.401 | .command_transient_transport_error' 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81622 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81622 ']' 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81622 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81622 00:18:36.660 killing process with pid 81622 00:18:36.660 Received shutdown signal, test time was about 2.000000 seconds 00:18:36.660 00:18:36.660 Latency(us) 00:18:36.660 [2024-12-11T13:59:29.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.660 [2024-12-11T13:59:29.707Z] =================================================================================================================== 00:18:36.660 [2024-12-11T13:59:29.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81622' 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 81622 00:18:36.660 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81622 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81675 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81675 /var/tmp/bperf.sock 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81675 ']' 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:36.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.919 13:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:36.919 [2024-12-11 13:59:29.819215] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:36.920 [2024-12-11 13:59:29.819504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81675 ] 00:18:36.920 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:36.920 Zero copy mechanism will not be used. 
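Note: the get_transient_errcount check traced above reads the per-status-code NVMe error counters that --nvme-error-stat enables and requires at least one Command Transient Transport Error (115 were recorded in this run). A minimal standalone sketch of that query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 as in the trace:

  # Sketch only: mirrors the bdev_get_iostat + jq extraction traced above.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  errcount=$($RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # The digest-error case only passes if the injected corruptions surfaced as transient transport errors.
  (( errcount > 0 )) || echo "no command_transient_transport_error recorded" >&2

The bdevperf relaunch that follows (pid 81675, randread, 128 KiB I/O, queue depth 16, per the -o 131072 -q 16 arguments in the trace) repeats the same flow for larger reads.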
00:18:37.178 [2024-12-11 13:59:29.968796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.178 [2024-12-11 13:59:30.031678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.178 [2024-12-11 13:59:30.088028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.178 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.178 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:37.178 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:37.178 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:37.437 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:38.006 nvme0n1 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:38.006 13:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:38.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:38.006 Zero copy mechanism will not be used. 00:18:38.006 Running I/O for 2 seconds... 
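Note: the trace above is the setup for the next digest-error case: the freshly started bdevperf is told to keep per-status-code NVMe error statistics and to retry failed commands at the bdev layer without an explicit limit, any stale CRC32C error injection is cleared, the controller is attached over TCP with data digest checking (--ddgst) enabled, and CRC32C error injection is then re-armed (-t corrupt -i 32), which is what produces the data digest errors reported below by the host's nvme_tcp receive path. A condensed sketch of that RPC sequence, with the socket path, address and NQN taken from the trace; the accel_error_inject_error calls go through rpc_cmd, assumed here to reach the application on the default RPC socket rather than bperf.sock:

  # Sketch only: condenses the RPC sequence traced above.
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf instance
  APP_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                            # default RPC socket (assumption)
  # Count NVMe errors per status code and retry failed commands without a fixed limit.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no CRC32C error injection is armed while the controller attaches.
  $APP_RPC accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest (--ddgst) enabled so received data digests are verified.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm CRC32C error injection exactly as traced (-t corrupt -i 32); each corrupted digest
  # shows up below as "data digest error" and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
  $APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32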
00:18:38.006 [2024-12-11 13:59:30.918131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.006 [2024-12-11 13:59:30.918192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.006 [2024-12-11 13:59:30.918209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.006 [2024-12-11 13:59:30.922566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.006 [2024-12-11 13:59:30.922815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.006 [2024-12-11 13:59:30.922833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.006 [2024-12-11 13:59:30.927119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.006 [2024-12-11 13:59:30.927159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.006 [2024-12-11 13:59:30.927173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.006 [2024-12-11 13:59:30.931414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.006 [2024-12-11 13:59:30.931483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.931513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.935770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.935808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.935837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.940095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.940133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.940162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.944509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.944547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.944577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.948834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.948900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.953117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.953155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.953183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.957386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.957424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.957453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.961585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.961622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.961651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.965878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.965918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.965948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.970026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.970065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.970079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.974208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.974246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.974260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.978529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.978568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.978581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.982813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.982849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.982878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.987055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.987092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.987147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.991319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.991359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.991373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:30.995654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:30.995693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:30.995738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.000047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.000098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.000128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.004443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.004483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.004497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.008735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.008784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.013053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.013088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.013101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.017367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.017620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.021865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.021992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.022109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.026456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.026629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.026734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.031118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.031157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.031171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.035605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.035769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 
[2024-12-11 13:59:31.035859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.040295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.040436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.040532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.044821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.044944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.045023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.007 [2024-12-11 13:59:31.049333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.007 [2024-12-11 13:59:31.049453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.007 [2024-12-11 13:59:31.049563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.053763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.053882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.053976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.058261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.058383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.058461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.062691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.062813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.062912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.067225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.067350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.067437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.071801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.072016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.076358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.076494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.076576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.080910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.081037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.081145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.085400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.267 [2024-12-11 13:59:31.085523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.267 [2024-12-11 13:59:31.085631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.267 [2024-12-11 13:59:31.089944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.089986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.089999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.094072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.094197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.094285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.098648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.098801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.098884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.103212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.103336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.103410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.107750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.107863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.107953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.112186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.112318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.116853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.116975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.117063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.121361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.121574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.126013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.126153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.130540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.130678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.130812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.135150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.135277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.135360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.139795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.139908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.139996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.144358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.144484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.144564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.148882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.149009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.149115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.153447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.153647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.157957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.158174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.162480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 
13:59:31.162582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.162692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.167053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.167199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.167280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.171511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.171690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.171809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.176172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.176308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.176394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.180695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.180825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.180922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.185215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.185331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.185413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.189746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.189880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.189957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.194415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.194534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.198961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.199206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.203492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.203615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.203710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.207977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.208094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.208171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.212439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.268 [2024-12-11 13:59:31.212562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.268 [2024-12-11 13:59:31.212650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.268 [2024-12-11 13:59:31.217049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.269 [2024-12-11 13:59:31.217184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.269 [2024-12-11 13:59:31.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.269 [2024-12-11 13:59:31.221434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.269 [2024-12-11 13:59:31.221551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.269 [2024-12-11 13:59:31.221631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.269 [2024-12-11 13:59:31.226027] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.269 [2024-12-11 13:59:31.226146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.269 [2024-12-11 13:59:31.226242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.794 [2024-12-11 13:59:31.826046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800)
00:18:38.794 [2024-12-11 13:59:31.826085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.794 [2024-12-11 13:59:31.826097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.794 [2024-12-11 13:59:31.830284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.794 [2024-12-11 13:59:31.830323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.794 [2024-12-11 13:59:31.830337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:38.794 [2024-12-11 13:59:31.834553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:38.794 [2024-12-11 13:59:31.834605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.794 [2024-12-11 13:59:31.834633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.054 [2024-12-11 13:59:31.838690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.054 [2024-12-11 13:59:31.838749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.054 [2024-12-11 13:59:31.838779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.054 [2024-12-11 13:59:31.842880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.054 [2024-12-11 13:59:31.842930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.054 [2024-12-11 13:59:31.842958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.054 [2024-12-11 13:59:31.847065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.847141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.847155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.851590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.851656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.851690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.856741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.856804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.856838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.861555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.861611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.861640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.865800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.865853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.865881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.869864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.869917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.869945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.873979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.874035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.874048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.878188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.878227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.878240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.882361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.882431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.882459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.886698] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.886763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.886794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.891383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.891423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.891436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.895729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.895776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.895789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.900009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.900065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.900094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.904310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.904364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.904393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.908507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.908562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.908607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.914358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.914442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.914470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:39.055 7037.00 IOPS, 879.62 MiB/s [2024-12-11T13:59:32.102Z] [2024-12-11 13:59:31.918801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.918869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.923132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.923174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.923188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.927572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.927626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.927650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.931953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.932009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.932038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.936192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.936248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.936277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.940412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.940468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.940497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.944740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.944803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.944833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.949047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.949101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.949130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.953383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.953437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.953465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.957663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.957740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.957754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.961877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.055 [2024-12-11 13:59:31.961959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.055 [2024-12-11 13:59:31.961988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.055 [2024-12-11 13:59:31.966144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.966183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.966196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.970438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.970490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.970518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.974806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.974858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.974886] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.979049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.979107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.979137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.983425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.983510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.983553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.987795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.987846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.987875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.992136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.992206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.992235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:31.996530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:31.996585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:31.996598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.000941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.000996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.001024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.005203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:39.056 [2024-12-11 13:59:32.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.009358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.009412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.009440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.013500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.013554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.013583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.017943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.017982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.017995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.022148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.022187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.022201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.026305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.026344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.026356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.030434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.030489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.030502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.034832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.034888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.034901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.039187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.039226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.039239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.043434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.043475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.043488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.047674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.047735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.047749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.052083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.052137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.052166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.056480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.056536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.056549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.060950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.060989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.061002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.065275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.065330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.065359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.069609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.069663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.069691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.073855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.073909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.073938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.078100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.078137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.078150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.082367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.056 [2024-12-11 13:59:32.082422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.056 [2024-12-11 13:59:32.082450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.056 [2024-12-11 13:59:32.086579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.057 [2024-12-11 13:59:32.086632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.057 [2024-12-11 13:59:32.086660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.057 [2024-12-11 13:59:32.090766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.057 [2024-12-11 13:59:32.090819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.057 [2024-12-11 13:59:32.090848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.057 [2024-12-11 13:59:32.094988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.057 [2024-12-11 13:59:32.095040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.057 [2024-12-11 13:59:32.095068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.057 [2024-12-11 13:59:32.099005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.057 [2024-12-11 13:59:32.099056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.057 [2024-12-11 13:59:32.099084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.103220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.103260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.103273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.107485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.107535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.107564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.111872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.111912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.111925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.116084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.116124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.116137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.120331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.120371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.120384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.124608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 
00:18:39.316 [2024-12-11 13:59:32.124649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.124661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.128875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.128915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.316 [2024-12-11 13:59:32.128927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.316 [2024-12-11 13:59:32.133145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.316 [2024-12-11 13:59:32.133185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.137549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.137589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.137603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.141863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.141902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.146135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.146176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.146189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.150372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.150412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.150424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.154792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.154845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.154874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.159104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.159148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.159160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.163324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.163365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.163377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.167675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.167738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.167752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.172121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.172177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.172206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.176437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.176494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.176507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.180696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.180757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.180770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.184957] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.185015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.185044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.189219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.189275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.189288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.193449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.193505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.193519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.197687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.197751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.197780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.202037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.202076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.202089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.206288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.206328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.210587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.210642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.210671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:18:39.317 [2024-12-11 13:59:32.214932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.214986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.215015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.219242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.219283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.219296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.223580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.223635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.223664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.227994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.228048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.228077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.232281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.232336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.232366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.236667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.236718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.236732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.240890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.240947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.240976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.245274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.245334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.245347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.249806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.317 [2024-12-11 13:59:32.249862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.317 [2024-12-11 13:59:32.249875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.317 [2024-12-11 13:59:32.254125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.254172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.254186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.258376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.258415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.258428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.262868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.262908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.262921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.267311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.267350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.267364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.271641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.271691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.271731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.275982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.276036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.276064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.280435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.280491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.280504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.284835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.284889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.284918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.289113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.289168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.289197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.293376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.293431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.293460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.297643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.297723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.297737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.301929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.301983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.302012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.306095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.306135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.310231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.310271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.310283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.314507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.314560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.314589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.318770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.318822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.318850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.323043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.323101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.323146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.327524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.327578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.327607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.331882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.331935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 
[2024-12-11 13:59:32.331964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.336105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.336159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.336188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.340371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.340426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.340454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.344823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.344879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.344908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.349133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.349187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.349215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.353394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.353455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.353484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.357661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.357739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.357753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.318 [2024-12-11 13:59:32.361895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.318 [2024-12-11 13:59:32.361951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:39.318 [2024-12-11 13:59:32.361979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.366108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.366148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.366161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.370189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.370229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.370241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.374437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.374494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.374507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.378645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.378724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.378737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.382924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.382974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.383002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.578 [2024-12-11 13:59:32.387957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.578 [2024-12-11 13:59:32.388024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.578 [2024-12-11 13:59:32.388039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.392344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.392388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.392401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.396543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.396614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.396643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.400895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.400949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.400979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.405207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.405261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.405290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.409460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.409544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.413860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.413913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.413943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.418121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.418162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.418175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.422308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.422378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.422407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.426663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.426742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.426757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.430960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.431012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.431041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.435223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.435263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.435276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.439624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.439676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.439690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.444005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.444061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.444074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.448251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.448307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.448321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.452474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 
[2024-12-11 13:59:32.452530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.452543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.456787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.456841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.456870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.461139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.461194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.461224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.465494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.465550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.465580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.469845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.469898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.469927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.474112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.474154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.474166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.478352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.478422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.478451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.482676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.482741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.482771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.487047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.487125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.491371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.491410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.491422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.495713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.495778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.495791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.499987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.500042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.500072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.504295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.504350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.579 [2024-12-11 13:59:32.504363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.579 [2024-12-11 13:59:32.508564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.579 [2024-12-11 13:59:32.508633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.508663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.512885] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.512940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.512969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.517157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.517211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.517241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.521439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.521494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.521524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.525770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.525825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.525855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.530020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.530076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.530089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.534258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.534298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.534311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.538466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.538521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.538551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
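Every failure in this run follows the same three-line pattern: nvme_tcp_accel_seq_recv_compute_crc32_done recomputes the NVMe/TCP data digest (a CRC32C over the received C2H DATA PDU payload) and reports a mismatch, and the affected READ is then completed with the generic status code 22h, Transient Transport Error, which spdk_nvme_print_completion renders as (00/22) with dnr:0, i.e. retryable. The minimal C sketch below illustrates both halves of that pattern under those assumptions; crc32c_sw, verify_data_digest and decode_status are illustrative helper names for this note only, not SPDK APIs.

/*
 * Illustrative sketch only -- not SPDK code.  Shows (1) the CRC-32C data
 * digest check whose failure the log reports as "data digest error", and
 * (2) how the completion status halfword decodes into the (sct/sc) p m dnr
 * fields printed by the completion lines above.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli): init 0xFFFFFFFF, reflected poly 0x82F63B78,
 * final XOR 0xFFFFFFFF.  Production stacks use accelerated versions. */
static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest carried in the PDU matches the payload. */
static int verify_data_digest(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c_sw(data, len) == ddgst ? 0 : -1;
}

/* Decode the CQE phase+status halfword (CQE DW3 bits 31:16) into the same
 * fields shown above: (sct/sc) p m dnr. */
static void decode_status(uint16_t sp)
{
    unsigned p   =  sp        & 0x1;
    unsigned sc  = (sp >> 1)  & 0xFF;
    unsigned sct = (sp >> 9)  & 0x7;
    unsigned m   = (sp >> 14) & 0x1;
    unsigned dnr = (sp >> 15) & 0x1;

    printf("status (%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    uint8_t payload[512];

    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c_sw(payload, sizeof(payload));
    uint32_t bad  = good ^ 0x1;   /* simulate a corrupted digest */

    printf("intact digest:    %s\n",
           verify_data_digest(payload, sizeof(payload), good) ? "digest error" : "ok");
    printf("corrupted digest: %s\n",
           verify_data_digest(payload, sizeof(payload), bad) ? "digest error" : "ok");

    /* SCT 0h, SC 22h -> halfword 0x22 << 1 == 0x0044, printed as "(00/22)". */
    decode_status(0x0044);
    return 0;
}

A single flipped bit in either the payload or the received digest is enough to fail the check, which is why the deliberately injected digest errors above produce one (00/22) completion per READ; real hosts use hardware-accelerated CRC32C rather than the bitwise loop in this sketch.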
00:18:39.580 [2024-12-11 13:59:32.542913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.542966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.542995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.547159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.547199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.547211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.551352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.551393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.551405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.555640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.555725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.555740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.559875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.559929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.559958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.564135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.564192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.564205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.568433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.568489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.568502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.572773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.572828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.572858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.577080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.577136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.577165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.581409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.581464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.581493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.585752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.585806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.585835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.590098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.590153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.590183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.594359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.594415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.594443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.598863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.598916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.598945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.603056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.603133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.603147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.607314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.607355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.607368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.611585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.611638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.611667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.615910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.615963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.615992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.580 [2024-12-11 13:59:32.620336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.580 [2024-12-11 13:59:32.620388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.580 [2024-12-11 13:59:32.620418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.624675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.624741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.624771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.629125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.629182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.629195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.633464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.633503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.633516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.637924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.637962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.637975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.642308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.642362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.642375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.646820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.646861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.646874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.651149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.651190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.651202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.655524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.655573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.655602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.659962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.660000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:39.840 [2024-12-11 13:59:32.660014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.664329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.664383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.664412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.668752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.668817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.668831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.673337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.673394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.673423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.677572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.677643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.677672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.682011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.682066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.682095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.686358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.686398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.840 [2024-12-11 13:59:32.686411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.840 [2024-12-11 13:59:32.690591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.840 [2024-12-11 13:59:32.690631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.690644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.695005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.695060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.695074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.699308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.699347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.699361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.703536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.703591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.703604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.707832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.707871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.707885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.712060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.712101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.712114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.716304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.716345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.716358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.720542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.720582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.720595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.724822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.724863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.724877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.729094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.729136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.729149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.733356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.733395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.733408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.737573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.737628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.737641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.741867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.741906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.741919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.746150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.746191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.746214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.750507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.750546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.750560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.754837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.754892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.754905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.759144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.759187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.759200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.763415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.763487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.763500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.767725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.767789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.767802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.772018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.772060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.772073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.776289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.776346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.776359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.780605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 
[2024-12-11 13:59:32.780646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.780659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.784873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.784927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.784940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.789165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.789206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.789219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.793438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.793478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.793490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.797635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.797691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.797716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.801975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.802016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.802030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.806314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.841 [2024-12-11 13:59:32.806355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.841 [2024-12-11 13:59:32.806368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.841 [2024-12-11 13:59:32.810595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.810652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.810666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.814845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.814899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.814912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.819080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.819144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.819158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.823341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.823380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.823392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.827609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.827650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.827663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.831861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.831900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.831913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.836077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.836117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.840332] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.840373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.840386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.844652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.844693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.844719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.848886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.848925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.848938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.853136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.853193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.853205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.857268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.857323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.857336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.861497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.861536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.861549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.865788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.865823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.865836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:18:39.842 [2024-12-11 13:59:32.870012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.870053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.870065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.874218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.874258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.874270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.878487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.878527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.878540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.842 [2024-12-11 13:59:32.882849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:39.842 [2024-12-11 13:59:32.882904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.842 [2024-12-11 13:59:32.882917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.101 [2024-12-11 13:59:32.887138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:40.101 [2024-12-11 13:59:32.887178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.101 [2024-12-11 13:59:32.887191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.101 [2024-12-11 13:59:32.891440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:40.101 [2024-12-11 13:59:32.891479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.101 [2024-12-11 13:59:32.891492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.101 [2024-12-11 13:59:32.895777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800) 00:18:40.101 [2024-12-11 13:59:32.895833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.101 [2024-12-11 13:59:32.895846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:18:40.101 [2024-12-11 13:59:32.900064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800)
00:18:40.101 [2024-12-11 13:59:32.900120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.101 [2024-12-11 13:59:32.900134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:18:40.101 [2024-12-11 13:59:32.904324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800)
00:18:40.101 [2024-12-11 13:59:32.904380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.101 [2024-12-11 13:59:32.904394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:40.101 [2024-12-11 13:59:32.908573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800)
00:18:40.101 [2024-12-11 13:59:32.908629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.101 [2024-12-11 13:59:32.908658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:18:40.101 [2024-12-11 13:59:32.914475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfed800)
00:18:40.101 [2024-12-11 13:59:32.914532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.101 [2024-12-11 13:59:32.914546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:18:40.101 7114.50 IOPS, 889.31 MiB/s
00:18:40.101 Latency(us)
00:18:40.101 [2024-12-11T13:59:33.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:40.101 nvme0n1 : 2.00 7115.64 889.46 0.00 0.00 2245.12 1906.50 7864.32
00:18:40.101 [2024-12-11T13:59:33.148Z] ===================================================================================================================
00:18:40.101 [2024-12-11T13:59:33.148Z] Total : 7115.64 889.46 0.00 0.00 2245.12 1906.50 7864.32
00:18:40.101 {
00:18:40.101   "results": [
00:18:40.101     {
00:18:40.101       "job": "nvme0n1",
00:18:40.101       "core_mask": "0x2",
00:18:40.101       "workload": "randread",
00:18:40.101       "status": "finished",
00:18:40.101       "queue_depth": 16,
00:18:40.101       "io_size": 131072,
00:18:40.101       "runtime": 2.004036,
00:18:40.101       "iops": 7115.640637194142,
00:18:40.101       "mibps": 889.4550796492678,
00:18:40.101       "io_failed": 0,
00:18:40.101       "io_timeout": 0,
00:18:40.101       "avg_latency_us": 2245.118237409155,
00:18:40.101       "min_latency_us": 1906.5018181818182,
00:18:40.101       "max_latency_us": 7864.32
00:18:40.101     }
00:18:40.101   ],
00:18:40.101   "core_count": 1
00:18:40.101 }
00:18:40.101 13:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:40.101 13:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27
-- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:40.101 13:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:40.101 | .driver_specific 00:18:40.101 | .nvme_error 00:18:40.102 | .status_code 00:18:40.102 | .command_transient_transport_error' 00:18:40.102 13:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:40.360 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 460 > 0 )) 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81675 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81675 ']' 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81675 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81675 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:40.361 killing process with pid 81675 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81675' 00:18:40.361 Received shutdown signal, test time was about 2.000000 seconds 00:18:40.361 00:18:40.361 Latency(us) 00:18:40.361 [2024-12-11T13:59:33.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.361 [2024-12-11T13:59:33.408Z] =================================================================================================================== 00:18:40.361 [2024-12-11T13:59:33.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81675 00:18:40.361 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81675 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81723 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81723 /var/tmp/bperf.sock 00:18:40.619 13:59:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81723 ']' 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.619 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:40.619 [2024-12-11 13:59:33.547833] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:40.619 [2024-12-11 13:59:33.547955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81723 ] 00:18:40.878 [2024-12-11 13:59:33.690505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.878 [2024-12-11 13:59:33.747771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.878 [2024-12-11 13:59:33.802843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.878 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.878 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:40.878 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:40.878 13:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:41.136 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:41.705 nvme0n1 00:18:41.705 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:41.705 13:59:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.705 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.705 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.705 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:41.705 13:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:41.705 Running I/O for 2 seconds... 00:18:41.705 [2024-12-11 13:59:34.627496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef7100 00:18:41.705 [2024-12-11 13:59:34.629150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.629209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.644107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef7970 00:18:41.705 [2024-12-11 13:59:34.645806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.645864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.660848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef81e0 00:18:41.705 [2024-12-11 13:59:34.662421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.677576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef8a50 00:18:41.705 [2024-12-11 13:59:34.679173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.679212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.694053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef92c0 00:18:41.705 [2024-12-11 13:59:34.695676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.695720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.710854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef9b30 00:18:41.705 [2024-12-11 13:59:34.712382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6933 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.712419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.727359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efa3a0 00:18:41.705 [2024-12-11 13:59:34.728856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.728893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:41.705 [2024-12-11 13:59:34.743829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efac10 00:18:41.705 [2024-12-11 13:59:34.745317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.705 [2024-12-11 13:59:34.745367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:41.964 [2024-12-11 13:59:34.760008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efb480 00:18:41.964 [2024-12-11 13:59:34.761431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.964 [2024-12-11 13:59:34.761481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:41.964 [2024-12-11 13:59:34.775996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efbcf0 00:18:41.964 [2024-12-11 13:59:34.777441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.964 [2024-12-11 13:59:34.777490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:41.964 [2024-12-11 13:59:34.792057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efc560 00:18:41.964 [2024-12-11 13:59:34.793447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.964 [2024-12-11 13:59:34.793497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:41.964 [2024-12-11 13:59:34.808086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efcdd0 00:18:41.965 [2024-12-11 13:59:34.809510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.809559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.824287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efd640 00:18:41.965 [2024-12-11 13:59:34.825626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20529 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.825675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.840459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efdeb0 00:18:41.965 [2024-12-11 13:59:34.841851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.841886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.856998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efe720 00:18:41.965 [2024-12-11 13:59:34.858343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.858378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.873567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eff3c8 00:18:41.965 [2024-12-11 13:59:34.874871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.874921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.896351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eff3c8 00:18:41.965 [2024-12-11 13:59:34.899016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.899065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.912844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efe720 00:18:41.965 [2024-12-11 13:59:34.915427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.915496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.929237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efdeb0 00:18:41.965 [2024-12-11 13:59:34.931781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.931835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.945561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efd640 00:18:41.965 [2024-12-11 13:59:34.948069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5291 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.948106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.961668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efcdd0 00:18:41.965 [2024-12-11 13:59:34.964155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.964208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.977808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efc560 00:18:41.965 [2024-12-11 13:59:34.980297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.980350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:34.994016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efbcf0 00:18:41.965 [2024-12-11 13:59:34.996522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:41.965 [2024-12-11 13:59:34.996575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:41.965 [2024-12-11 13:59:35.010360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efb480 00:18:42.223 [2024-12-11 13:59:35.012863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.012915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.026846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efac10 00:18:42.223 [2024-12-11 13:59:35.029226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.029295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.043995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016efa3a0 00:18:42.223 [2024-12-11 13:59:35.046358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.046397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.060243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef9b30 00:18:42.223 [2024-12-11 13:59:35.062651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.062728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.076687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef92c0 00:18:42.223 [2024-12-11 13:59:35.079022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.079072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.093130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef8a50 00:18:42.223 [2024-12-11 13:59:35.095516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.095568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.109620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef81e0 00:18:42.223 [2024-12-11 13:59:35.111935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.125768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef7970 00:18:42.223 [2024-12-11 13:59:35.128068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.128107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.141999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef7100 00:18:42.223 [2024-12-11 13:59:35.144306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.144358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.158182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef6890 00:18:42.223 [2024-12-11 13:59:35.160432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.160487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.174545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef6020 00:18:42.223 [2024-12-11 13:59:35.176828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.176865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.190945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef57b0 00:18:42.223 [2024-12-11 13:59:35.193133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-11 13:59:35.193169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.223 [2024-12-11 13:59:35.207038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef4f40 00:18:42.223 [2024-12-11 13:59:35.209268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-11 13:59:35.209303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.224 [2024-12-11 13:59:35.223595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef46d0 00:18:42.224 [2024-12-11 13:59:35.225828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-11 13:59:35.225863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.224 [2024-12-11 13:59:35.239802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef3e60 00:18:42.224 [2024-12-11 13:59:35.241938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-11 13:59:35.241974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.224 [2024-12-11 13:59:35.256111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef35f0 00:18:42.224 [2024-12-11 13:59:35.258293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-11 13:59:35.258328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.481 [2024-12-11 13:59:35.272587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef2d80 00:18:42.481 [2024-12-11 13:59:35.274735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.481 [2024-12-11 13:59:35.274793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.481 [2024-12-11 13:59:35.289308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef2510 00:18:42.481 [2024-12-11 13:59:35.291384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:1956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.481 [2024-12-11 13:59:35.291423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.481 [2024-12-11 13:59:35.305632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef1ca0 00:18:42.481 [2024-12-11 13:59:35.307764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.481 [2024-12-11 13:59:35.307815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.481 [2024-12-11 13:59:35.321952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef1430 00:18:42.481 [2024-12-11 13:59:35.324053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.481 [2024-12-11 13:59:35.324104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.481 [2024-12-11 13:59:35.338326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef0bc0 00:18:42.481 [2024-12-11 13:59:35.340413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.481 [2024-12-11 13:59:35.340450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.354494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef0350 00:18:42.482 [2024-12-11 13:59:35.356623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.356658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.370951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eefae0 00:18:42.482 [2024-12-11 13:59:35.373003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.373039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.387315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eef270 00:18:42.482 [2024-12-11 13:59:35.389299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.389350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.403616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeea00 00:18:42.482 [2024-12-11 13:59:35.405587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.405650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.419722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eee190 00:18:42.482 [2024-12-11 13:59:35.421656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.421691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.435894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eed920 00:18:42.482 [2024-12-11 13:59:35.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.437950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.452261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eed0b0 00:18:42.482 [2024-12-11 13:59:35.454189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.454224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.468630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eec840 00:18:42.482 [2024-12-11 13:59:35.470575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.470608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.484759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eebfd0 00:18:42.482 [2024-12-11 13:59:35.486710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.486767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.500943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeb760 00:18:42.482 [2024-12-11 13:59:35.502767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.502798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.482 [2024-12-11 13:59:35.517123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeaef0 00:18:42.482 [2024-12-11 13:59:35.518997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.482 [2024-12-11 13:59:35.519031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.533095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eea680 00:18:42.740 [2024-12-11 13:59:35.534891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.534927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.549093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee9e10 00:18:42.740 [2024-12-11 13:59:35.550885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.550920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.565125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee95a0 00:18:42.740 [2024-12-11 13:59:35.566904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.566939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.581705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee8d30 00:18:42.740 [2024-12-11 13:59:35.583467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.583523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.598680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee84c0 00:18:42.740 [2024-12-11 13:59:35.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.600456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.614799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee7c50 00:18:42.740 15435.00 IOPS, 60.29 MiB/s [2024-12-11T13:59:35.787Z] [2024-12-11 13:59:35.616556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.616623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.630939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x702770) with pdu=0x200016ee73e0 00:18:42.740 [2024-12-11 13:59:35.632635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.632670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.647505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee6b70 00:18:42.740 [2024-12-11 13:59:35.649124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.649158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.663907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee6300 00:18:42.740 [2024-12-11 13:59:35.665520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.665570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.680450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee5a90 00:18:42.740 [2024-12-11 13:59:35.682077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.682143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.696771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee5220 00:18:42.740 [2024-12-11 13:59:35.698319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.698349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.713080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee49b0 00:18:42.740 [2024-12-11 13:59:35.714654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.740 [2024-12-11 13:59:35.714689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.740 [2024-12-11 13:59:35.729605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee4140 00:18:42.740 [2024-12-11 13:59:35.731205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.741 [2024-12-11 13:59:35.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.741 [2024-12-11 13:59:35.745988] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee38d0 00:18:42.741 [2024-12-11 13:59:35.747557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.741 [2024-12-11 13:59:35.747610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.741 [2024-12-11 13:59:35.762188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee3060 00:18:42.741 [2024-12-11 13:59:35.763676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.741 [2024-12-11 13:59:35.763740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.741 [2024-12-11 13:59:35.778390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee27f0 00:18:42.741 [2024-12-11 13:59:35.779917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.741 [2024-12-11 13:59:35.779955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.999 [2024-12-11 13:59:35.794666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee1f80 00:18:42.999 [2024-12-11 13:59:35.796222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.999 [2024-12-11 13:59:35.796272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.999 [2024-12-11 13:59:35.811149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee1710 00:18:42.999 [2024-12-11 13:59:35.812617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.999 [2024-12-11 13:59:35.812683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.827255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee0ea0 00:18:43.000 [2024-12-11 13:59:35.828620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.828668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.843252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee0630 00:18:43.000 [2024-12-11 13:59:35.844615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.844649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.859154] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edfdc0 00:18:43.000 [2024-12-11 13:59:35.860570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.860618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.875541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edf550 00:18:43.000 [2024-12-11 13:59:35.876955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.876990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.891483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edece0 00:18:43.000 [2024-12-11 13:59:35.892916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.892954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.907990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ede470 00:18:43.000 [2024-12-11 13:59:35.909358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.909408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.931229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eddc00 00:18:43.000 [2024-12-11 13:59:35.933784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.933836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.947401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ede470 00:18:43.000 [2024-12-11 13:59:35.950048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.950079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.963657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edece0 00:18:43.000 [2024-12-11 13:59:35.966163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.966210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 
13:59:35.979944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edf550 00:18:43.000 [2024-12-11 13:59:35.982405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.982439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:35.996077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016edfdc0 00:18:43.000 [2024-12-11 13:59:35.998534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:35.998567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:36.012229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee0630 00:18:43.000 [2024-12-11 13:59:36.014706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:36.014745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:36.028240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee0ea0 00:18:43.000 [2024-12-11 13:59:36.030741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.000 [2024-12-11 13:59:36.030774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:43.000 [2024-12-11 13:59:36.044279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee1710 00:18:43.259 [2024-12-11 13:59:36.046724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.259 [2024-12-11 13:59:36.046764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:43.259 [2024-12-11 13:59:36.060417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee1f80 00:18:43.259 [2024-12-11 13:59:36.062839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.259 [2024-12-11 13:59:36.062871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:43.259 [2024-12-11 13:59:36.076601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee27f0 00:18:43.259 [2024-12-11 13:59:36.079047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.259 [2024-12-11 13:59:36.079083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:18:43.259 [2024-12-11 13:59:36.093002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee3060 00:18:43.259 [2024-12-11 13:59:36.095392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.095431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.109722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee38d0 00:18:43.260 [2024-12-11 13:59:36.112099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.112137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.126827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee4140 00:18:43.260 [2024-12-11 13:59:36.129135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.129191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.143107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee49b0 00:18:43.260 [2024-12-11 13:59:36.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.145428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.159123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee5220 00:18:43.260 [2024-12-11 13:59:36.161455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.161491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.175807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee5a90 00:18:43.260 [2024-12-11 13:59:36.178068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.178104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.192269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee6300 00:18:43.260 [2024-12-11 13:59:36.194573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.194622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.208574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee6b70 00:18:43.260 [2024-12-11 13:59:36.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.210900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.224863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee73e0 00:18:43.260 [2024-12-11 13:59:36.227037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.227087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.241413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee7c50 00:18:43.260 [2024-12-11 13:59:36.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.243689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.257614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee84c0 00:18:43.260 [2024-12-11 13:59:36.259833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.259884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.274110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee8d30 00:18:43.260 [2024-12-11 13:59:36.276272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.276310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:43.260 [2024-12-11 13:59:36.290300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee95a0 00:18:43.260 [2024-12-11 13:59:36.292454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.260 [2024-12-11 13:59:36.292506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:43.519 [2024-12-11 13:59:36.306562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ee9e10 00:18:43.519 [2024-12-11 13:59:36.308768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.519 [2024-12-11 13:59:36.308805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:43.519 [2024-12-11 13:59:36.322910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eea680 00:18:43.519 [2024-12-11 13:59:36.325057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.519 [2024-12-11 13:59:36.325105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:43.519 [2024-12-11 13:59:36.339260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeaef0 00:18:43.519 [2024-12-11 13:59:36.341323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.519 [2024-12-11 13:59:36.341358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:43.519 [2024-12-11 13:59:36.355467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeb760 00:18:43.519 [2024-12-11 13:59:36.357552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.519 [2024-12-11 13:59:36.357600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:43.519 [2024-12-11 13:59:36.371903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eebfd0 00:18:43.520 [2024-12-11 13:59:36.373975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.374010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.388274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eec840 00:18:43.520 [2024-12-11 13:59:36.390333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.390382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.404437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eed0b0 00:18:43.520 [2024-12-11 13:59:36.406429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.406477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.420895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eed920 00:18:43.520 [2024-12-11 13:59:36.422941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.422989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.437226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eee190 00:18:43.520 [2024-12-11 13:59:36.439224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.453539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eeea00 00:18:43.520 [2024-12-11 13:59:36.455540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.455590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.470041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eef270 00:18:43.520 [2024-12-11 13:59:36.472013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.472050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.486477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016eefae0 00:18:43.520 [2024-12-11 13:59:36.488394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.488431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.502639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef0350 00:18:43.520 [2024-12-11 13:59:36.504615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.504667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.518840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef0bc0 00:18:43.520 [2024-12-11 13:59:36.520725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.520771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.535104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef1430 00:18:43.520 [2024-12-11 13:59:36.536886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.536951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:43.520 [2024-12-11 13:59:36.551322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef1ca0 00:18:43.520 [2024-12-11 13:59:36.553177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.520 [2024-12-11 13:59:36.553211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:43.779 [2024-12-11 13:59:36.567352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef2510 00:18:43.779 [2024-12-11 13:59:36.569222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.779 [2024-12-11 13:59:36.569271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:43.779 [2024-12-11 13:59:36.583521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef2d80 00:18:43.779 [2024-12-11 13:59:36.585304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.779 [2024-12-11 13:59:36.585336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:43.779 [2024-12-11 13:59:36.599684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef35f0 00:18:43.779 [2024-12-11 13:59:36.601473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.779 [2024-12-11 13:59:36.601522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:43.779 15497.50 IOPS, 60.54 MiB/s [2024-12-11T13:59:36.826Z] [2024-12-11 13:59:36.616009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702770) with pdu=0x200016ef3e60 00:18:43.779 [2024-12-11 13:59:36.617729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.779 [2024-12-11 13:59:36.617763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:43.779 00:18:43.779 Latency(us) 00:18:43.779 [2024-12-11T13:59:36.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.779 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.779 nvme0n1 : 2.01 15516.97 60.61 0.00 0.00 8241.45 2412.92 31218.97 00:18:43.779 [2024-12-11T13:59:36.826Z] =================================================================================================================== 00:18:43.779 [2024-12-11T13:59:36.826Z] Total : 15516.97 60.61 0.00 0.00 8241.45 2412.92 31218.97 00:18:43.779 { 00:18:43.779 "results": [ 00:18:43.779 { 00:18:43.779 "job": "nvme0n1", 00:18:43.779 "core_mask": "0x2", 00:18:43.779 "workload": "randwrite", 00:18:43.779 "status": "finished", 00:18:43.779 "queue_depth": 128, 00:18:43.779 "io_size": 4096, 00:18:43.779 "runtime": 2.005739, 00:18:43.779 "iops": 
15516.974042983658, 00:18:43.779 "mibps": 60.613179855404915, 00:18:43.779 "io_failed": 0, 00:18:43.779 "io_timeout": 0, 00:18:43.779 "avg_latency_us": 8241.447086486756, 00:18:43.779 "min_latency_us": 2412.9163636363637, 00:18:43.779 "max_latency_us": 31218.967272727274 00:18:43.779 } 00:18:43.779 ], 00:18:43.779 "core_count": 1 00:18:43.779 } 00:18:43.779 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:43.779 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:43.779 | .driver_specific 00:18:43.779 | .nvme_error 00:18:43.779 | .status_code 00:18:43.779 | .command_transient_transport_error' 00:18:43.779 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:43.779 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81723 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81723 ']' 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81723 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81723 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.038 killing process with pid 81723 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81723' 00:18:44.038 Received shutdown signal, test time was about 2.000000 seconds 00:18:44.038 00:18:44.038 Latency(us) 00:18:44.038 [2024-12-11T13:59:37.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.038 [2024-12-11T13:59:37.085Z] =================================================================================================================== 00:18:44.038 [2024-12-11T13:59:37.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81723 00:18:44.038 13:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81723 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:44.297 
13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81776 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81776 /var/tmp/bperf.sock 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81776 ']' 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:44.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.297 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.297 [2024-12-11 13:59:37.200278] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:44.297 [2024-12-11 13:59:37.200598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81776 ] 00:18:44.297 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:44.297 Zero copy mechanism will not be used. 
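A minimal sketch of the pass/fail check traced above, assuming only the rpc.py path, bperf socket, and jq filter that appear in the log (it is a condensation of the trace, not part of the original harness output):

  #!/usr/bin/env bash
  # Read the transient transport error counter that bdevperf accumulated for
  # nvme0n1 and require it to be non-zero (the trace above reports 122).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as shown in the trace
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  (( errcount > 0 ))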
00:18:44.555 [2024-12-11 13:59:37.349247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.555 [2024-12-11 13:59:37.407019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.555 [2024-12-11 13:59:37.462777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.555 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.555 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:44.555 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:44.555 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:44.814 13:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:45.381 nvme0n1 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:45.381 13:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:45.381 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:45.381 Zero copy mechanism will not be used. 00:18:45.381 Running I/O for 2 seconds... 
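The 131072-byte, queue-depth-16 run above is brought up by the same pattern of RPCs visible in the trace; the following is a condensed sketch reusing only commands and arguments that appear in the log (the accel error-injection calls are shown via rpc.py without an explicit socket, mirroring the rpc_cmd lines in the trace):

  #!/usr/bin/env bash
  # Start bdevperf against its own RPC socket, enable NVMe error statistics,
  # attach the TCP target with data digest enabled, arm CRC-32C corruption in
  # the accel layer, then kick off the workload.
  spdk=/home/vagrant/spdk_repo/spdk                 # repo path as shown in the trace
  sock=/var/tmp/bperf.sock
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  # (the harness waits for $sock via waitforlisten before issuing RPCs)
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests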
00:18:45.381 [2024-12-11 13:59:38.272144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.272285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.272316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.277707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.277813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.282953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.283029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.283053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.288114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.288215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.288238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.293343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.293579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.293602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.298604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.298678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.298713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.303695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.303786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.303809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.308893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.308993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.309015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.314145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.314220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.314243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.319382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.319456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.319478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.324716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.324835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.324858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.329928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.330035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.335126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.335213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.335235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.340001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.340254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.340292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.345073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.345152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.345183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.350211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.350297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.355517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.355601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.355624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.360644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.360764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.360787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.365971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.366064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.366086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.371267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.371338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.381 [2024-12-11 13:59:38.371361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.381 [2024-12-11 13:59:38.376513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.381 [2024-12-11 13:59:38.376586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.376610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.381746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.381842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.381864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.386984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.387055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.387077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.392185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.392272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.392294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.397416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.397645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.397668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.402835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.402922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.402945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.408091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.408163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.408186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.413333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.413562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.413584] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:45.382 [2024-12-11 13:59:38.418745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:45.382 [2024-12-11 13:59:38.418844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.382 [2024-12-11 13:59:38.418867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8, nvme_qpair.c:243:nvme_io_qpair_print_command: *NOTICE*: WRITE, nvme_qpair.c:474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 5 ms for qid:1, cid:0-3, nsid:1, len:32, with varying LBAs, between 13:59:38.424 and 13:59:39.174 ...]
00:18:46.164 [2024-12-11 13:59:39.179163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.179238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 13:59:39.179259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.164 [2024-12-11 13:59:39.184559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.184779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 
13:59:39.184801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.164 [2024-12-11 13:59:39.190101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.190226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 13:59:39.190248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.164 [2024-12-11 13:59:39.195589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.195686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 13:59:39.195708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.164 [2024-12-11 13:59:39.200980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.201063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 13:59:39.201085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.164 [2024-12-11 13:59:39.206092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.164 [2024-12-11 13:59:39.206161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.164 [2024-12-11 13:59:39.206183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.211484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.211576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.211598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.216772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.216843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.216865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.222106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.222174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.424 [2024-12-11 13:59:39.222196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.227372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.227446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.227482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.232706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.232961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.232983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.238196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.238291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.238313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.243570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.243661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.243683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.248816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.248887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.248910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.253861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.253938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.253960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.258943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.259017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.259039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 5819.00 IOPS, 727.38 MiB/s [2024-12-11T13:59:39.471Z] [2024-12-11 13:59:39.265489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.265570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.265593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.270745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.270851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.270873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.275864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.275932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.275954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.281075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.281298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.281514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.286382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.286611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.286865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.291719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.291803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.291825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.296850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.296935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.296957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.301965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.302065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.307129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.307208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.312289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.312496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.312517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.317545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.317617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.317639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.322822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.322903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.327969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.328075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.333149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 
13:59:39.333232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.333253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.338227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.424 [2024-12-11 13:59:39.338303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.424 [2024-12-11 13:59:39.338325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.424 [2024-12-11 13:59:39.343397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.343493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.343514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.348533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.348760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.348783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.353747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.353869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.353891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.358900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.358979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.359001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.364140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.364222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.364244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.369256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 
00:18:46.425 [2024-12-11 13:59:39.369364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.369386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.374465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.374591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.374612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.379766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.379888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.379911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.384830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.384958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.389921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.390004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.390025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.394834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.394939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.394970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.400057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.400239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.400272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.405234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) 
with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.405397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.405427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.410246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.410364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.410394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.415283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.415352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.415376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.420227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.420341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.420364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.425480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.425557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.425580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.430581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.430660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.430682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.435670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.435794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.435817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.440886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.440978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.441000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.446076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.446220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.451517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.451625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.451647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.456692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.456797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.456819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.461878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.461960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.461982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.425 [2024-12-11 13:59:39.466956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.425 [2024-12-11 13:59:39.467034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.425 [2024-12-11 13:59:39.467056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.472080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.472158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.472180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.477198] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.477281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.482491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.482596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.482618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.487868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.487962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.487983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.493037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.493133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.493155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.498271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.498366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.498387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.503562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.503656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.503678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.508834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.508921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.508942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.514009] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.514121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.514142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.519256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.519337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.519360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.524650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.524763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.524797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.529961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.530054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.530091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.535247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.535332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.535356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.540641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.540743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.540765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.545931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.546029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.546051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 
[2024-12-11 13:59:39.551199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.551290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.551311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.556486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.556576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.556597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.561863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.561955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.561977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.567177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.567259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.567280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.572512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.572598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.572620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.577806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.577883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.577904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.583022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.583144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.583167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:18:46.685 [2024-12-11 13:59:39.588301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.588394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.588415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.593656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.593779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.593800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.599047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.599138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.599160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.604386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.604460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.685 [2024-12-11 13:59:39.604482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.685 [2024-12-11 13:59:39.609531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.685 [2024-12-11 13:59:39.609620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.614759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.614870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.614890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.620072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.620172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.620208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.625530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.625602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.625625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.630728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.630839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.630860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.635984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.636066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.641206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.641289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.641315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.646442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.646532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.646554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.651776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.651872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.651894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.656948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.657061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.657082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.662168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.662274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.667348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.667432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.667455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.672536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.672630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.672652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.677693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.677799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.677821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.682859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.682953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.682974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.688222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.688306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.688328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.693488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.693575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.693597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.698635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.698731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.698768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.703724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.703828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.703849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.708955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.709029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.714303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.714399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.714421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.719531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.719629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.719651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.724683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.724777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.724799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.686 [2024-12-11 13:59:39.729917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.686 [2024-12-11 13:59:39.730010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.686 [2024-12-11 13:59:39.730034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.735312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.735397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.735420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.740499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.740583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.740606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.745760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.745858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.745880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.750930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.751029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.751051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.756266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.756352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.756374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.761446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.761547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.761569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.766722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.766852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.766874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.772049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.772133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.772155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.777427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.777503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.777525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.782763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.782839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.782861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.787889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.787973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.787995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.793231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.793309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.793331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.798573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.798685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.798707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.804139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.804240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 
13:59:39.804262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.809341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.809415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.809437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.814652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.814790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.814812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.820035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.825221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.825307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.825330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.830492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.946 [2024-12-11 13:59:39.830629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-12-11 13:59:39.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-12-11 13:59:39.835809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.835923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.835945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.840971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.841047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.947 [2024-12-11 13:59:39.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.846235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.846362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.846383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.851458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.851572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.851595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.856711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.856853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.861899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.861983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.867090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.867189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.867211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.872360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.872443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.872466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.877480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.877568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.877590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.882599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.882696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.882719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.887823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.887924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.887945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.892999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.893098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.893119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.898169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.898246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.898268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.903329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.903405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.903427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.908480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.908576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.908598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.913747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.913839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.913862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.918965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.919042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.919064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.924135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.929283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.929366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.929387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.934532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.934619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.934641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.939663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.939771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.939794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.944908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.944991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.945013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.950042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.950118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.950141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.955205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.955281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.955303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.960286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.960371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.960393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.965438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.965516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.965538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.970537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.970634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.970655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.947 [2024-12-11 13:59:39.975677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.947 [2024-12-11 13:59:39.975772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-12-11 13:59:39.975794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.948 [2024-12-11 13:59:39.980724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.948 [2024-12-11 13:59:39.980804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.948 [2024-12-11 13:59:39.980825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.948 [2024-12-11 13:59:39.985852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.948 [2024-12-11 13:59:39.985939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.948 [2024-12-11 13:59:39.985961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.948 [2024-12-11 13:59:39.990972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:46.948 [2024-12-11 13:59:39.991058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.948 [2024-12-11 13:59:39.991079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:39.996099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:39.996191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:39.996212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.001220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.001305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.001326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.006376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.006457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.006479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.011422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.011501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.011523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.016566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.016643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.016665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.021618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.021692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.021730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.026752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.026839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.207 [2024-12-11 13:59:40.026861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.207 [2024-12-11 13:59:40.031828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.207 [2024-12-11 13:59:40.031905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.031927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.036929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.037012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.042055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.042138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.042159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.047296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.047379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.047400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.052482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.052584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.052606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.057711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 
13:59:40.057806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.057828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.062881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.062978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.062999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.068322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.068433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.068454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.073592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.073675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.073697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.078816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.078906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.078927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.083982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.084065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.084086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.089253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.089335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.089356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.094506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 
00:18:47.208 [2024-12-11 13:59:40.094604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.094625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.099844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.099952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.099973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.105235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.105316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.105338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.110489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.110592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.110623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.115727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.115846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.115868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.120963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.121043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.121065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.126176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.126273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.126295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.131487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with 
pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.131597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.131628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.136811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.136899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.136921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.142066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.142181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.142202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.147315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.147420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.152587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.152673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.152709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.157708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.157806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.157827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.162907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.163014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.163039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.168099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.168215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.168237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.173372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.173481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.208 [2024-12-11 13:59:40.173502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.208 [2024-12-11 13:59:40.178676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.208 [2024-12-11 13:59:40.178771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.183901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.183985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.184007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.189033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.189151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.189172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.194314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.194409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.194430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.199490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.199609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.199629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.204784] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.204890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.204913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.209919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.210002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.210023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.215088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.215210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.215232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.220448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.220551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.220572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.225434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.225516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.225538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.230591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.230721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.230742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.235914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.236022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.236043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.241068] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.241174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.241211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.246323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.246429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.246450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.209 [2024-12-11 13:59:40.251579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.209 [2024-12-11 13:59:40.251656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.209 [2024-12-11 13:59:40.251679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.467 [2024-12-11 13:59:40.256663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.467 [2024-12-11 13:59:40.256777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.467 [2024-12-11 13:59:40.256799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.467 [2024-12-11 13:59:40.261961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x702ab0) with pdu=0x200016eff3c8 00:18:47.467 [2024-12-11 13:59:40.262042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.467 [2024-12-11 13:59:40.262064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.467 5875.50 IOPS, 734.44 MiB/s 00:18:47.467 Latency(us) 00:18:47.467 [2024-12-11T13:59:40.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.467 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:47.467 nvme0n1 : 2.00 5874.76 734.35 0.00 0.00 2717.21 1720.32 12809.31 00:18:47.467 [2024-12-11T13:59:40.514Z] =================================================================================================================== 00:18:47.467 [2024-12-11T13:59:40.514Z] Total : 5874.76 734.35 0.00 0.00 2717.21 1720.32 12809.31 00:18:47.467 { 00:18:47.467 "results": [ 00:18:47.467 { 00:18:47.467 "job": "nvme0n1", 00:18:47.467 "core_mask": "0x2", 00:18:47.467 "workload": "randwrite", 00:18:47.467 "status": "finished", 00:18:47.467 "queue_depth": 16, 00:18:47.467 "io_size": 131072, 00:18:47.467 "runtime": 2.004337, 00:18:47.467 "iops": 5874.76058167863, 00:18:47.467 "mibps": 734.3450727098287, 00:18:47.467 "io_failed": 0, 00:18:47.467 "io_timeout": 0, 00:18:47.467 
"avg_latency_us": 2717.2071837869134, 00:18:47.467 "min_latency_us": 1720.32, 00:18:47.467 "max_latency_us": 12809.309090909092 00:18:47.467 } 00:18:47.467 ], 00:18:47.467 "core_count": 1 00:18:47.467 } 00:18:47.467 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:47.467 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:47.467 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:47.467 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:47.467 | .driver_specific 00:18:47.467 | .nvme_error 00:18:47.467 | .status_code 00:18:47.467 | .command_transient_transport_error' 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81776 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81776 ']' 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81776 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81776 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.725 killing process with pid 81776 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81776' 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81776 00:18:47.725 Received shutdown signal, test time was about 2.000000 seconds 00:18:47.725 00:18:47.725 Latency(us) 00:18:47.725 [2024-12-11T13:59:40.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.725 [2024-12-11T13:59:40.772Z] =================================================================================================================== 00:18:47.725 [2024-12-11T13:59:40.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.725 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81776 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 81599 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81599 ']' 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81599 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.983 
13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81599 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.983 killing process with pid 81599 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81599' 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81599 00:18:47.983 13:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81599 00:18:48.241 00:18:48.241 real 0m15.512s 00:18:48.241 user 0m30.163s 00:18:48.241 sys 0m4.505s 00:18:48.241 ************************************ 00:18:48.241 END TEST nvmf_digest_error 00:18:48.241 ************************************ 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.241 rmmod nvme_tcp 00:18:48.241 rmmod nvme_fabrics 00:18:48.241 rmmod nvme_keyring 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 81599 ']' 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 81599 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 81599 ']' 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 81599 00:18:48.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81599) - No such process 00:18:48.241 Process with pid 81599 is not found 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 81599 is not found' 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:48.241 13:59:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:48.241 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:48.499 00:18:48.499 real 0m34.344s 00:18:48.499 user 1m5.148s 00:18:48.499 sys 0m9.545s 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:48.499 ************************************ 00:18:48.499 END TEST nvmf_digest 00:18:48.499 ************************************ 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.499 ************************************ 00:18:48.499 START TEST nvmf_host_multipath 00:18:48.499 ************************************ 00:18:48.499 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:48.758 * Looking for test storage... 00:18:48.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.758 --rc genhtml_branch_coverage=1 00:18:48.758 --rc genhtml_function_coverage=1 00:18:48.758 --rc genhtml_legend=1 00:18:48.758 --rc geninfo_all_blocks=1 00:18:48.758 --rc geninfo_unexecuted_blocks=1 00:18:48.758 00:18:48.758 ' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.758 --rc genhtml_branch_coverage=1 00:18:48.758 --rc genhtml_function_coverage=1 00:18:48.758 --rc genhtml_legend=1 00:18:48.758 --rc geninfo_all_blocks=1 00:18:48.758 --rc geninfo_unexecuted_blocks=1 00:18:48.758 00:18:48.758 ' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.758 --rc genhtml_branch_coverage=1 00:18:48.758 --rc genhtml_function_coverage=1 00:18:48.758 --rc genhtml_legend=1 00:18:48.758 --rc geninfo_all_blocks=1 00:18:48.758 --rc geninfo_unexecuted_blocks=1 00:18:48.758 00:18:48.758 ' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.758 --rc genhtml_branch_coverage=1 00:18:48.758 --rc genhtml_function_coverage=1 00:18:48.758 --rc genhtml_legend=1 00:18:48.758 --rc geninfo_all_blocks=1 00:18:48.758 --rc geninfo_unexecuted_blocks=1 00:18:48.758 00:18:48.758 ' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.758 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:48.759 Cannot find device "nvmf_init_br" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:48.759 Cannot find device "nvmf_init_br2" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:48.759 Cannot find device "nvmf_tgt_br" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.759 Cannot find device "nvmf_tgt_br2" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:48.759 Cannot find device "nvmf_init_br" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:48.759 Cannot find device "nvmf_init_br2" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:48.759 Cannot find device "nvmf_tgt_br" 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:48.759 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:49.018 Cannot find device "nvmf_tgt_br2" 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:49.018 Cannot find device "nvmf_br" 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:49.018 Cannot find device "nvmf_init_if" 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:49.018 Cannot find device "nvmf_init_if2" 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:49.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:49.018 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
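The NET_TYPE=virt setup being built here is a veth-and-bridge topology: the target's interfaces live inside the nvmf_tgt_ns_spdk namespace while the initiator's stay in the root namespace, and every peer leg is enslaved to the nvmf_br bridge. A condensed sketch of the first initiator/target pair, assembled from the commands traced around this point (the *_if2/*_br2 pair, the remaining link-up calls, and the iptables ACCEPT rules are handled the same way):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg + its bridge leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target leg + its bridge leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target leg into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # enslave both bridge legs into nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                            # sanity check: initiator can reach the target address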
00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:49.018 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:49.277 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.277 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:49.277 00:18:49.277 --- 10.0.0.3 ping statistics --- 00:18:49.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.277 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:49.277 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:49.277 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:18:49.277 00:18:49.277 --- 10.0.0.4 ping statistics --- 00:18:49.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.277 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:49.277 00:18:49.277 --- 10.0.0.1 ping statistics --- 00:18:49.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.277 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:49.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:49.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:49.277 00:18:49.277 --- 10.0.0.2 ping statistics --- 00:18:49.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.277 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=82087 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 82087 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 82087 ']' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.277 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.277 [2024-12-11 13:59:42.206758] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
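nvmfappstart then runs the target inside that namespace so it owns 10.0.0.3/10.0.0.4, and the bring-up RPCs that follow in the trace give the multipath test its two ANA-reporting listeners on the same address. Roughly, as a sketch assembled from the commands in this log (the flag glosses in the comments paraphrase the options as used here, they are not output from the trace):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to the target's default /var/tmp/spdk.sock once it is listening
$rpc nvmf_create_transport -t tcp -o -u 8192         # TCP transport with the suite's usual transport options
$rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB malloc bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # open access, ANA reporting, 2 namespaces max
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
On the host side the trace goes on to attach a single bdevperf controller to both listeners with -x multipath, then repeatedly flips the listeners' ANA states (optimized, non_optimized, inaccessible) via nvmf_subsystem_listener_set_ana_state and uses scripts/bpftrace.sh with bpf/nvmf_path.bt to count which path actually carries I/O before each confirm_io_on_port check.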
00:18:49.277 [2024-12-11 13:59:42.206879] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.536 [2024-12-11 13:59:42.360809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:49.536 [2024-12-11 13:59:42.429751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.536 [2024-12-11 13:59:42.429822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.536 [2024-12-11 13:59:42.429836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.536 [2024-12-11 13:59:42.429846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.536 [2024-12-11 13:59:42.429855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.536 [2024-12-11 13:59:42.431259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.536 [2024-12-11 13:59:42.431270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.536 [2024-12-11 13:59:42.489186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.536 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.536 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:49.536 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.536 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.536 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.795 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.795 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=82087 00:18:49.795 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:50.054 [2024-12-11 13:59:42.893107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.054 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:50.312 Malloc0 00:18:50.312 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:50.570 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.829 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:51.103 [2024-12-11 13:59:44.011128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:51.103 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:51.386 [2024-12-11 13:59:44.279285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=82135 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 82135 /var/tmp/bdevperf.sock 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 82135 ']' 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.386 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:52.760 13:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.760 13:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:52.760 13:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:52.760 13:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:53.018 Nvme0n1 00:18:53.018 13:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:53.585 Nvme0n1 00:18:53.585 13:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:53.585 13:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:54.521 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:54.521 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:54.779 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.038 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:55.038 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82186 00:18:55.038 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.038 13:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:01.601 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:01.601 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.601 Attaching 4 probes... 00:19:01.601 @path[10.0.0.3, 4421]: 12822 00:19:01.601 @path[10.0.0.3, 4421]: 13220 00:19:01.601 @path[10.0.0.3, 4421]: 13092 00:19:01.601 @path[10.0.0.3, 4421]: 13315 00:19:01.601 @path[10.0.0.3, 4421]: 13818 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.601 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82186 00:19:01.602 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.602 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:01.602 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:01.602 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.168 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:02.168 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82298 00:19:02.169 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:02.169 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:08.726 14:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:08.726 14:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.726 Attaching 4 probes... 00:19:08.726 @path[10.0.0.3, 4420]: 18363 00:19:08.726 @path[10.0.0.3, 4420]: 18826 00:19:08.726 @path[10.0.0.3, 4420]: 18956 00:19:08.726 @path[10.0.0.3, 4420]: 18835 00:19:08.726 @path[10.0.0.3, 4420]: 18703 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82298 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:08.726 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:08.985 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:08.985 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82412 00:19:08.985 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:08.985 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:15.592 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:15.592 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.592 Attaching 4 probes... 00:19:15.592 @path[10.0.0.3, 4421]: 14846 00:19:15.592 @path[10.0.0.3, 4421]: 17968 00:19:15.592 @path[10.0.0.3, 4421]: 18146 00:19:15.592 @path[10.0.0.3, 4421]: 18227 00:19:15.592 @path[10.0.0.3, 4421]: 18295 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82412 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.592 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:15.593 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:15.593 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:15.851 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:15.851 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82530 00:19:15.851 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:15.851 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.413 Attaching 4 probes... 
00:19:22.413 00:19:22.413 00:19:22.413 00:19:22.413 00:19:22.413 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82530 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:22.413 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:22.413 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:22.671 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:22.671 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82642 00:19:22.672 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:22.672 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.245 Attaching 4 probes... 
00:19:29.245 @path[10.0.0.3, 4421]: 18514 00:19:29.245 @path[10.0.0.3, 4421]: 18762 00:19:29.245 @path[10.0.0.3, 4421]: 18418 00:19:29.245 @path[10.0.0.3, 4421]: 18917 00:19:29.245 @path[10.0.0.3, 4421]: 18680 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82642 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.245 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:29.245 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:30.181 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:30.181 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82766 00:19:30.181 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:30.181 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.741 Attaching 4 probes... 
00:19:36.741 @path[10.0.0.3, 4420]: 18320 00:19:36.741 @path[10.0.0.3, 4420]: 18409 00:19:36.741 @path[10.0.0.3, 4420]: 18701 00:19:36.741 @path[10.0.0.3, 4420]: 18781 00:19:36.741 @path[10.0.0.3, 4420]: 18776 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82766 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:36.741 [2024-12-11 14:00:29.685877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:36.741 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:36.999 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:43.590 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:43.590 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82940 00:19:43.590 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:43.590 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:50.178 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:50.178 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:50.178 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:50.178 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.178 Attaching 4 probes... 
00:19:50.178 @path[10.0.0.3, 4421]: 18063 00:19:50.178 @path[10.0.0.3, 4421]: 18512 00:19:50.178 @path[10.0.0.3, 4421]: 17975 00:19:50.178 @path[10.0.0.3, 4421]: 17891 00:19:50.178 @path[10.0.0.3, 4421]: 18158 00:19:50.178 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82940 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 82135 ']' 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.179 killing process with pid 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82135' 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 82135 00:19:50.179 { 00:19:50.179 "results": [ 00:19:50.179 { 00:19:50.179 "job": "Nvme0n1", 00:19:50.179 "core_mask": "0x4", 00:19:50.179 "workload": "verify", 00:19:50.179 "status": "terminated", 00:19:50.179 "verify_range": { 00:19:50.179 "start": 0, 00:19:50.179 "length": 16384 00:19:50.179 }, 00:19:50.179 "queue_depth": 128, 00:19:50.179 "io_size": 4096, 00:19:50.179 "runtime": 55.783815, 00:19:50.179 "iops": 7490.183308545677, 00:19:50.179 "mibps": 29.258528549006552, 00:19:50.179 "io_failed": 0, 00:19:50.179 "io_timeout": 0, 00:19:50.179 "avg_latency_us": 17060.135119510043, 00:19:50.179 "min_latency_us": 1392.64, 00:19:50.179 "max_latency_us": 7046430.72 00:19:50.179 } 00:19:50.179 ], 00:19:50.179 "core_count": 1 00:19:50.179 } 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 82135 00:19:50.179 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.179 [2024-12-11 13:59:44.362940] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 
initialization... 00:19:50.179 [2024-12-11 13:59:44.363113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82135 ] 00:19:50.179 [2024-12-11 13:59:44.515217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.179 [2024-12-11 13:59:44.570352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.179 [2024-12-11 13:59:44.626630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.179 Running I/O for 90 seconds... 00:19:50.179 6805.00 IOPS, 26.58 MiB/s [2024-12-11T14:00:43.226Z] 6820.00 IOPS, 26.64 MiB/s [2024-12-11T14:00:43.226Z] 6722.67 IOPS, 26.26 MiB/s [2024-12-11T14:00:43.226Z] 6674.25 IOPS, 26.07 MiB/s [2024-12-11T14:00:43.226Z] 6645.00 IOPS, 25.96 MiB/s [2024-12-11T14:00:43.226Z] 6646.83 IOPS, 25.96 MiB/s [2024-12-11T14:00:43.226Z] 6684.71 IOPS, 26.11 MiB/s [2024-12-11T14:00:43.226Z] 6706.50 IOPS, 26.20 MiB/s [2024-12-11T14:00:43.226Z] [2024-12-11 13:59:54.900778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.900852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.900912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.900935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.900964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.900985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.179 [2024-12-11 13:59:54.901686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.901955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.901984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.902005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.902031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.179 [2024-12-11 13:59:54.902050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.179 [2024-12-11 13:59:54.902077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.179 [2024-12-11 13:59:54.902097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.902425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.902965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.902991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.180 
[2024-12-11 13:59:54.903209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.180 [2024-12-11 13:59:54.903656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.903968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.903994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.904014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.180 [2024-12-11 13:59:54.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.180 [2024-12-11 13:59:54.904060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904190] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.904619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 
13:59:54.904665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.904966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.904993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59496 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.181 [2024-12-11 13:59:54.905453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.905970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.905990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.906017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.181 [2024-12-11 13:59:54.906063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.181 [2024-12-11 13:59:54.906083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.906129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.182 
[2024-12-11 13:59:54.906156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.906176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.906502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.906523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.907902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.182 [2024-12-11 13:59:54.907936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.907970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.907993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.908957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.908977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.182 [2024-12-11 13:59:54.909024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.182 [2024-12-11 13:59:54.909461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.182 [2024-12-11 13:59:54.909480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.909519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.909541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.909981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.910679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:19:50.183 [2024-12-11 13:59:54.910950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.910970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.910996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.183 [2024-12-11 13:59:54.911665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.911921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.183 [2024-12-11 13:59:54.911957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.183 [2024-12-11 13:59:54.911979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.912273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.184 [2024-12-11 13:59:54.912787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.912970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.912997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.184 [2024-12-11 13:59:54.913254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.184 [2024-12-11 13:59:54.913914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.184 [2024-12-11 13:59:54.913964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.913994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.914041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.914061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.914087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.914106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.924664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.924758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:19:50.185 [2024-12-11 13:59:54.924807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.924853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.924910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.924957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.924977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.185 [2024-12-11 13:59:54.925683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.925954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.925980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.185 [2024-12-11 13:59:54.926226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.185 [2024-12-11 13:59:54.926419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.185 [2024-12-11 13:59:54.926445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.926464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.926490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.926509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.926536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.926554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.926580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.926600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59664 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.929963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.930787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.930852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.930890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.930917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:19:50.186 [2024-12-11 13:59:54.930955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.930982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.931047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.931132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.931203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.931267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.186 [2024-12-11 13:59:54.931333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.931946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.186 [2024-12-11 13:59:54.931973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.186 [2024-12-11 13:59:54.932010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.932439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.187 [2024-12-11 13:59:54.932933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.932971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.187 [2024-12-11 13:59:54.932998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.933940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.933978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.187 [2024-12-11 13:59:54.934342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.187 [2024-12-11 13:59:54.934380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.934933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.934962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:19:50.188 [2024-12-11 13:59:54.935000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.935434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.935939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.935977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.936450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.936950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.936987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.188 [2024-12-11 13:59:54.937014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.937051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:50.188 [2024-12-11 13:59:54.937078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.937116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.188 [2024-12-11 13:59:54.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.188 [2024-12-11 13:59:54.937180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.937931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.937958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.938006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.938034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.938071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.938097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.938135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.938161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.938199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.938227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 13:59:54.940081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 13:59:54.940161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.189 6872.56 IOPS, 26.85 MiB/s [2024-12-11T14:00:43.236Z] 7114.10 IOPS, 27.79 MiB/s [2024-12-11T14:00:43.236Z] 7327.00 IOPS, 28.62 MiB/s [2024-12-11T14:00:43.236Z] 7503.08 IOPS, 29.31 MiB/s [2024-12-11T14:00:43.236Z] 7654.54 IOPS, 29.90 MiB/s [2024-12-11T14:00:43.236Z] 7773.50 IOPS, 30.37 MiB/s [2024-12-11T14:00:43.236Z] [2024-12-11 14:00:01.471896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.471991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.189 [2024-12-11 14:00:01.472847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.472965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.472991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.473010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.473035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.473053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:19:50.189 [2024-12-11 14:00:01.473078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.473097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.473122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.189 [2024-12-11 14:00:01.473140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.189 [2024-12-11 14:00:01.473165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.473652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.473966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.473985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.190 [2024-12-11 14:00:01.474451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.474956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.474982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.475001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.475026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.475045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.475072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.475103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.475154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.475185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.475214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.190 [2024-12-11 14:00:01.475234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.190 [2024-12-11 14:00:01.475261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.475730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.475980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.475999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:19:50.191 [2024-12-11 14:00:01.476134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.191 [2024-12-11 14:00:01.476562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.476958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.476985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.477004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.477032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.477063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.477092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.191 [2024-12-11 14:00:01.477113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.191 [2024-12-11 14:00:01.477140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:50.192 [2024-12-11 14:00:01.477581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.477666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.477687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.478510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 
nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.478934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.478986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:01.479012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:01.479458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.192 [2024-12-11 14:00:01.479479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.192 7816.33 IOPS, 30.53 MiB/s [2024-12-11T14:00:43.239Z] 
7367.88 IOPS, 28.78 MiB/s [2024-12-11T14:00:43.239Z] 7455.41 IOPS, 29.12 MiB/s [2024-12-11T14:00:43.239Z] 7540.78 IOPS, 29.46 MiB/s [2024-12-11T14:00:43.239Z] 7624.74 IOPS, 29.78 MiB/s [2024-12-11T14:00:43.239Z] 7699.10 IOPS, 30.07 MiB/s [2024-12-11T14:00:43.239Z] 7768.67 IOPS, 30.35 MiB/s [2024-12-11T14:00:43.239Z] 7826.82 IOPS, 30.57 MiB/s [2024-12-11T14:00:43.239Z] [2024-12-11 14:00:08.681219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.192 [2024-12-11 14:00:08.681714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.192 [2024-12-11 14:00:08.681757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.681780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.681805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.681823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.681848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.681894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.681912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.681937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.681955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.681981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.682000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.682043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.682087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.193 [2024-12-11 14:00:08.682872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.682926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.682968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.682988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.193 [2024-12-11 14:00:08.683677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.193 [2024-12-11 14:00:08.683703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.683769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.683816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.683861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.683907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.683954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.683973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.684032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.684077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:19:50.194 [2024-12-11 14:00:08.684196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.194 [2024-12-11 14:00:08.684862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.684907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.684954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.684982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.194 [2024-12-11 14:00:08.685573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.194 [2024-12-11 14:00:08.685619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.194 [2024-12-11 14:00:08.685645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.685969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.685995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.686390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.686965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.686993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.687012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:19:50.195 [2024-12-11 14:00:08.687038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.687059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.687086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.687121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.687862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.195 [2024-12-11 14:00:08.687895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.687937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.687959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.687993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.195 [2024-12-11 14:00:08.688333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.195 [2024-12-11 14:00:08.688375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.195 7535.57 IOPS, 29.44 MiB/s [2024-12-11T14:00:43.242Z] 7221.58 IOPS, 28.21 MiB/s [2024-12-11T14:00:43.242Z] 6932.72 IOPS, 27.08 MiB/s [2024-12-11T14:00:43.243Z] 6666.08 IOPS, 26.04 MiB/s [2024-12-11T14:00:43.243Z] 6419.19 IOPS, 25.07 MiB/s [2024-12-11T14:00:43.243Z] 6189.93 IOPS, 24.18 MiB/s [2024-12-11T14:00:43.243Z] 5976.48 IOPS, 23.35 MiB/s [2024-12-11T14:00:43.243Z] 6041.83 IOPS, 23.60 MiB/s [2024-12-11T14:00:43.243Z] 6153.26 IOPS, 24.04 MiB/s [2024-12-11T14:00:43.243Z] 6247.97 IOPS, 24.41 MiB/s [2024-12-11T14:00:43.243Z] 6344.94 IOPS, 24.78 MiB/s [2024-12-11T14:00:43.243Z] 6431.03 IOPS, 25.12 MiB/s [2024-12-11T14:00:43.243Z] 6514.03 IOPS, 25.45 MiB/s [2024-12-11T14:00:43.243Z] [2024-12-11 14:00:22.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.061965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.061988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.062634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.196 [2024-12-11 14:00:22.062779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.062972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.196 [2024-12-11 14:00:22.062992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.196 [2024-12-11 14:00:22.063073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.196 [2024-12-11 14:00:22.063103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 
14:00:22.063602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.063948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.063980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.063997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.197 [2024-12-11 14:00:22.064534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.064570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.197 [2024-12-11 14:00:22.064631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.197 [2024-12-11 14:00:22.064650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.064969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.064987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 
14:00:22.065132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.198 [2024-12-11 14:00:22.065698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.198 [2024-12-11 14:00:22.065945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.065971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1815290 is same with the state(6) to be set 00:19:50.198 [2024-12-11 14:00:22.065990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.198 [2024-12-11 14:00:22.066003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.198 [2024-12-11 14:00:22.066015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26864 len:8 PRP1 0x0 PRP2 0x0 00:19:50.198 [2024-12-11 14:00:22.066030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.066046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.198 [2024-12-11 14:00:22.066058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.198 [2024-12-11 14:00:22.066085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27448 len:8 PRP1 0x0 PRP2 0x0 00:19:50.198 [2024-12-11 14:00:22.066100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.198 [2024-12-11 14:00:22.066115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27464 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27472 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27480 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27488 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27496 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27504 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:27512 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27520 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27528 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27536 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27544 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27552 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.066904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.066945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.066958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27560 len:8 PRP1 0x0 PRP2 0x0 
00:19:50.199 [2024-12-11 14:00:22.066973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.066990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.199 [2024-12-11 14:00:22.067002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.199 [2024-12-11 14:00:22.067015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27568 len:8 PRP1 0x0 PRP2 0x0 00:19:50.199 [2024-12-11 14:00:22.067030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.068317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:50.199 [2024-12-11 14:00:22.068406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.199 [2024-12-11 14:00:22.068433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.199 [2024-12-11 14:00:22.068467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1785e90 (9): Bad file descriptor 00:19:50.199 [2024-12-11 14:00:22.068940] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.199 [2024-12-11 14:00:22.068975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1785e90 with addr=10.0.0.3, port=4421 00:19:50.199 [2024-12-11 14:00:22.068995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1785e90 is same with the state(6) to be set 00:19:50.199 [2024-12-11 14:00:22.069088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1785e90 (9): Bad file descriptor 00:19:50.199 [2024-12-11 14:00:22.069132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:50.199 [2024-12-11 14:00:22.069153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:50.199 [2024-12-11 14:00:22.069170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:50.199 [2024-12-11 14:00:22.069187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:50.199 [2024-12-11 14:00:22.069204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:50.199 6588.44 IOPS, 25.74 MiB/s [2024-12-11T14:00:43.246Z] 6661.41 IOPS, 26.02 MiB/s [2024-12-11T14:00:43.246Z] 6727.58 IOPS, 26.28 MiB/s [2024-12-11T14:00:43.246Z] 6791.38 IOPS, 26.53 MiB/s [2024-12-11T14:00:43.246Z] 6855.20 IOPS, 26.78 MiB/s [2024-12-11T14:00:43.246Z] 6917.27 IOPS, 27.02 MiB/s [2024-12-11T14:00:43.246Z] 6976.76 IOPS, 27.25 MiB/s [2024-12-11T14:00:43.246Z] 7031.81 IOPS, 27.47 MiB/s [2024-12-11T14:00:43.246Z] 7082.00 IOPS, 27.66 MiB/s [2024-12-11T14:00:43.246Z] 7121.96 IOPS, 27.82 MiB/s [2024-12-11T14:00:43.246Z] [2024-12-11 14:00:32.146227] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
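The run of ASYMMETRIC ACCESS INACCESSIBLE completions above, followed by the aborted queued I/O and the reconnect to 10.0.0.3 port 4421, is consistent with the multipath test flipping the ANA state of one listener so the initiator fails over to the second path and resets the controller. As a rough, hand-driven illustration of that flip using SPDK's rpc.py (a sketch only: the NQN, address and port are taken from the log above, the first-listener service id and the exact option spellings are assumptions, and this is not an excerpt of host/multipath.sh):

# Hypothetical example -- values inferred from this log; option names may vary by SPDK release.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Mark the active listener inaccessible; in-flight I/O then completes with
# "ASYMMETRIC ACCESS INACCESSIBLE" and the host disconnects and resets, as logged above.
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" \
    --trtype tcp --traddr 10.0.0.3 --trsvcid 4420 --ana-state inaccessible

# The host should reconnect on the other listener (port 4421 in this run); polling the
# controller list on the initiator-side RPC socket (pass it with -s; path omitted here)
# shows when the reset has completed.
"$RPC" bdev_nvme_get_controllers

# Put the first path back once the failover has been verified.
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" \
    --trtype tcp --traddr 10.0.0.3 --trsvcid 4420 --ana-state optimized
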
00:19:50.199 7166.59 IOPS, 27.99 MiB/s [2024-12-11T14:00:43.246Z] 7206.45 IOPS, 28.15 MiB/s [2024-12-11T14:00:43.246Z] 7243.31 IOPS, 28.29 MiB/s [2024-12-11T14:00:43.246Z] 7277.69 IOPS, 28.43 MiB/s [2024-12-11T14:00:43.246Z] 7312.14 IOPS, 28.56 MiB/s [2024-12-11T14:00:43.246Z] 7346.80 IOPS, 28.70 MiB/s [2024-12-11T14:00:43.246Z] 7383.94 IOPS, 28.84 MiB/s [2024-12-11T14:00:43.246Z] 7414.02 IOPS, 28.96 MiB/s [2024-12-11T14:00:43.246Z] 7442.50 IOPS, 29.07 MiB/s [2024-12-11T14:00:43.246Z] 7472.42 IOPS, 29.19 MiB/s [2024-12-11T14:00:43.246Z] Received shutdown signal, test time was about 55.784620 seconds 00:19:50.199 00:19:50.199 Latency(us) 00:19:50.199 [2024-12-11T14:00:43.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.199 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.199 Verification LBA range: start 0x0 length 0x4000 00:19:50.199 Nvme0n1 : 55.78 7490.18 29.26 0.00 0.00 17060.14 1392.64 7046430.72 00:19:50.199 [2024-12-11T14:00:43.246Z] =================================================================================================================== 00:19:50.199 [2024-12-11T14:00:43.246Z] Total : 7490.18 29.26 0.00 0.00 17060.14 1392.64 7046430.72 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.199 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.200 rmmod nvme_tcp 00:19:50.200 rmmod nvme_fabrics 00:19:50.200 rmmod nvme_keyring 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 82087 ']' 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 82087 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 82087 ']' 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 82087 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82087 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82087' 00:19:50.200 killing process with pid 82087 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 82087 00:19:50.200 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 82087 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.458 14:00:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:50.458 00:19:50.458 real 1m1.988s 00:19:50.458 user 2m51.767s 00:19:50.458 sys 0m18.755s 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.458 14:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:50.458 ************************************ 00:19:50.458 END TEST nvmf_host_multipath 00:19:50.458 ************************************ 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.717 ************************************ 00:19:50.717 START TEST nvmf_timeout 00:19:50.717 ************************************ 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:50.717 * Looking for test storage... 00:19:50.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:50.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.717 --rc genhtml_branch_coverage=1 00:19:50.717 --rc genhtml_function_coverage=1 00:19:50.717 --rc genhtml_legend=1 00:19:50.717 --rc geninfo_all_blocks=1 00:19:50.717 --rc geninfo_unexecuted_blocks=1 00:19:50.717 00:19:50.717 ' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:50.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.717 --rc genhtml_branch_coverage=1 00:19:50.717 --rc genhtml_function_coverage=1 00:19:50.717 --rc genhtml_legend=1 00:19:50.717 --rc geninfo_all_blocks=1 00:19:50.717 --rc geninfo_unexecuted_blocks=1 00:19:50.717 00:19:50.717 ' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:50.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.717 --rc genhtml_branch_coverage=1 00:19:50.717 --rc genhtml_function_coverage=1 00:19:50.717 --rc genhtml_legend=1 00:19:50.717 --rc geninfo_all_blocks=1 00:19:50.717 --rc geninfo_unexecuted_blocks=1 00:19:50.717 00:19:50.717 ' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:50.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.717 --rc genhtml_branch_coverage=1 00:19:50.717 --rc genhtml_function_coverage=1 00:19:50.717 --rc genhtml_legend=1 00:19:50.717 --rc geninfo_all_blocks=1 00:19:50.717 --rc geninfo_unexecuted_blocks=1 00:19:50.717 00:19:50.717 ' 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.717 
14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.717 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.718 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.718 14:00:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.718 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:50.977 Cannot find device "nvmf_init_br" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:50.977 Cannot find device "nvmf_init_br2" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:50.977 Cannot find device "nvmf_tgt_br" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.977 Cannot find device "nvmf_tgt_br2" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:50.977 Cannot find device "nvmf_init_br" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:50.977 Cannot find device "nvmf_init_br2" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:50.977 Cannot find device "nvmf_tgt_br" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:50.977 Cannot find device "nvmf_tgt_br2" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:50.977 Cannot find device "nvmf_br" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:50.977 Cannot find device "nvmf_init_if" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:50.977 Cannot find device "nvmf_init_if2" 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.977 14:00:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.977 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:50.977 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:50.977 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:50.977 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:50.977 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:51.235 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:51.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:51.236 00:19:51.236 --- 10.0.0.3 ping statistics --- 00:19:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.236 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:51.236 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:51.236 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:19:51.236 00:19:51.236 --- 10.0.0.4 ping statistics --- 00:19:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.236 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:51.236 00:19:51.236 --- 10.0.0.1 ping statistics --- 00:19:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.236 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:51.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:51.236 00:19:51.236 --- 10.0.0.2 ping statistics --- 00:19:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.236 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=83305 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 83305 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83305 ']' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.236 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.236 [2024-12-11 14:00:44.248179] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:51.236 [2024-12-11 14:00:44.248290] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.494 [2024-12-11 14:00:44.404135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.494 [2024-12-11 14:00:44.458574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.494 [2024-12-11 14:00:44.458647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.494 [2024-12-11 14:00:44.458669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.494 [2024-12-11 14:00:44.458679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.494 [2024-12-11 14:00:44.458688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.494 [2024-12-11 14:00:44.459993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.494 [2024-12-11 14:00:44.460007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.494 [2024-12-11 14:00:44.518725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.752 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:52.010 [2024-12-11 14:00:44.915828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.010 14:00:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:52.275 Malloc0 00:19:52.275 14:00:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.559 14:00:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.826 14:00:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:53.093 [2024-12-11 14:00:46.084001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=83348 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 83348 /var/tmp/bdevperf.sock 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83348 ']' 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.093 14:00:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.359 [2024-12-11 14:00:46.165328] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:53.359 [2024-12-11 14:00:46.165438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83348 ] 00:19:53.359 [2024-12-11 14:00:46.311964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.359 [2024-12-11 14:00:46.376694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.627 [2024-12-11 14:00:46.432601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:54.196 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.196 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:54.196 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:54.454 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:55.020 NVMe0n1 00:19:55.020 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=83376 00:19:55.020 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.020 14:00:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:55.020 Running I/O for 10 seconds... 
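For reference, the setup that the trace above records can be collected into the following sequence. This is a minimal sketch, assuming the same repo path (/home/vagrant/spdk_repo/spdk), namespace (nvmf_tgt_ns_spdk), and listener address (10.0.0.3:4420) that appear in this log; it restates the traced commands rather than the helper functions in nvmf/common.sh and timeout.sh.

  # Target side: start nvmf_tgt inside the test namespace with the flags traced above.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: bdevperf on its own RPC socket, with the controller-loss/reconnect knobs under test.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  BPRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $BPRPC bdev_nvme_set_options -r -1
  $BPRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The output that follows is the 10-second verify run, during which the test removes the 10.0.0.3:4420 listener so that outstanding I/O completes with ABORTED - SQ DELETION and the reconnect/timeout path is exercised.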
00:19:55.952 14:00:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:56.212 7218.00 IOPS, 28.20 MiB/s [2024-12-11T14:00:49.259Z] [2024-12-11 14:00:49.034789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.212 [2024-12-11 14:00:49.034855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.212 [2024-12-11 14:00:49.034879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.212 [2024-12-11 14:00:49.034981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.034999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.213 [2024-12-11 14:00:49.035691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.035892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.035903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 
14:00:49.036928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.036980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.036989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.213 [2024-12-11 14:00:49.037786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.213 [2024-12-11 14:00:49.037796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.037807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.037817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.037828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.037838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.037849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.037858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.037869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.037878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.037889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.037898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.038869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.039889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.039900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:56.214 [2024-12-11 14:00:49.040215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.214 [2024-12-11 14:00:49.040829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.214 [2024-12-11 14:00:49.040840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.041888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.041899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.042806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.042817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.215 [2024-12-11 14:00:49.043309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b09100 is same with the state(6) to be set 00:19:56.215 [2024-12-11 14:00:49.043603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.043667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.043677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66816 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.043722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.043730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66840 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.043740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.043756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66848 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.043773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.043783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.043790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66856 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.043805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.044141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.044152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.044161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66864 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.044170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.044179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.044186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.215 [2024-12-11 14:00:49.044194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66872 len:8 PRP1 0x0 PRP2 0x0 00:19:56.215 [2024-12-11 14:00:49.044204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.215 [2024-12-11 14:00:49.044213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.215 [2024-12-11 14:00:49.044220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66880 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.044640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.044648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66888 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.044675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.044682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66896 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.044722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.044730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66904 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.044852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.044861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66912 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.044888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.044895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.044904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66920 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.044913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.045041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.045051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.045059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66928 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.045514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.045524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.045532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66936 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.045541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.045551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.045566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.045574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66944 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.045583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.045592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.216 [2024-12-11 14:00:49.045828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.216 [2024-12-11 14:00:49.045841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66952 len:8 PRP1 0x0 PRP2 0x0 00:19:56.216 [2024-12-11 14:00:49.045852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.046227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.216 [2024-12-11 14:00:49.046255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.046267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.216 [2024-12-11 14:00:49.046276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.046286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.216 [2024-12-11 14:00:49.046295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.046305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.216 [2024-12-11 14:00:49.046314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.216 [2024-12-11 14:00:49.046322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b070 is same with the state(6) to be set 00:19:56.216 [2024-12-11 14:00:49.046904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:56.216 [2024-12-11 14:00:49.046944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b070 (9): Bad file descriptor 00:19:56.216 [2024-12-11 14:00:49.047261] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.216 [2024-12-11 14:00:49.047296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9b070 with addr=10.0.0.3, port=4420 00:19:56.216 [2024-12-11 14:00:49.047309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b070 is same with the state(6) to be set 00:19:56.216 [2024-12-11 14:00:49.047332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b070 (9): Bad file descriptor 00:19:56.216 [2024-12-11 14:00:49.047349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:56.216 [2024-12-11 14:00:49.047358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:56.216 [2024-12-11 14:00:49.047666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:56.216 [2024-12-11 14:00:49.047680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:56.216 [2024-12-11 14:00:49.047692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:56.216 14:00:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:58.137 4121.00 IOPS, 16.10 MiB/s [2024-12-11T14:00:51.184Z] 2747.33 IOPS, 10.73 MiB/s [2024-12-11T14:00:51.184Z] [2024-12-11 14:00:51.048062] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.137 [2024-12-11 14:00:51.048152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9b070 with addr=10.0.0.3, port=4420 00:19:58.137 [2024-12-11 14:00:51.048167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b070 is same with the state(6) to be set 00:19:58.137 [2024-12-11 14:00:51.048206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b070 (9): Bad file descriptor 00:19:58.137 [2024-12-11 14:00:51.048226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:58.137 [2024-12-11 14:00:51.048236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:58.137 [2024-12-11 14:00:51.048247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:58.137 [2024-12-11 14:00:51.048259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:58.137 [2024-12-11 14:00:51.048270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:58.137 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:58.137 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:58.137 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:58.395 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:58.395 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:58.395 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:58.395 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:58.652 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:58.652 14:00:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:59.844 2060.50 IOPS, 8.05 MiB/s [2024-12-11T14:00:53.149Z] 1648.40 IOPS, 6.44 MiB/s [2024-12-11T14:00:53.149Z] [2024-12-11 14:00:53.048388] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.102 [2024-12-11 14:00:53.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9b070 with addr=10.0.0.3, port=4420 00:20:00.102 [2024-12-11 14:00:53.048490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b070 is same with the state(6) to be set 00:20:00.102 [2024-12-11 14:00:53.048514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b070 (9): Bad file descriptor 00:20:00.102 [2024-12-11 14:00:53.048533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:00.102 [2024-12-11 14:00:53.048543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:00.102 [2024-12-11 14:00:53.048554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:00.102 [2024-12-11 14:00:53.048566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:00.102 [2024-12-11 14:00:53.048576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:02.008 1373.67 IOPS, 5.37 MiB/s [2024-12-11T14:00:55.055Z] 1177.43 IOPS, 4.60 MiB/s [2024-12-11T14:00:55.055Z] [2024-12-11 14:00:55.048625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:02.008 [2024-12-11 14:00:55.048693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:02.008 [2024-12-11 14:00:55.048718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:02.008 [2024-12-11 14:00:55.048729] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:02.008 [2024-12-11 14:00:55.048741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:03.198 1030.25 IOPS, 4.02 MiB/s 00:20:03.198 Latency(us) 00:20:03.198 [2024-12-11T14:00:56.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.198 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.198 Verification LBA range: start 0x0 length 0x4000 00:20:03.198 NVMe0n1 : 8.16 1009.80 3.94 15.68 0.00 124839.36 3872.58 7046430.72 00:20:03.198 [2024-12-11T14:00:56.245Z] =================================================================================================================== 00:20:03.198 [2024-12-11T14:00:56.245Z] Total : 1009.80 3.94 15.68 0.00 124839.36 3872.58 7046430.72 00:20:03.198 { 00:20:03.198 "results": [ 00:20:03.198 { 00:20:03.198 "job": "NVMe0n1", 00:20:03.198 "core_mask": "0x4", 00:20:03.198 "workload": "verify", 00:20:03.198 "status": "finished", 00:20:03.198 "verify_range": { 00:20:03.198 "start": 0, 00:20:03.198 "length": 16384 00:20:03.198 }, 00:20:03.198 "queue_depth": 128, 00:20:03.198 "io_size": 4096, 00:20:03.198 "runtime": 8.162016, 00:20:03.198 "iops": 1009.799539721559, 00:20:03.198 "mibps": 3.9445294520373397, 00:20:03.198 "io_failed": 128, 00:20:03.198 "io_timeout": 0, 00:20:03.198 "avg_latency_us": 124839.36232866299, 00:20:03.198 "min_latency_us": 3872.581818181818, 00:20:03.198 "max_latency_us": 7046430.72 00:20:03.198 } 00:20:03.198 ], 00:20:03.198 "core_count": 1 00:20:03.198 } 00:20:03.764 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:03.764 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:03.764 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:04.021 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:04.022 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:04.022 14:00:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:04.022 14:00:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 83376 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 83348 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83348 ']' 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83348 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83348 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:04.280 killing process with pid 83348 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83348' 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83348 00:20:04.280 Received shutdown signal, test time was about 9.364509 seconds 00:20:04.280 00:20:04.280 Latency(us) 00:20:04.280 [2024-12-11T14:00:57.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.280 [2024-12-11T14:00:57.327Z] =================================================================================================================== 00:20:04.280 [2024-12-11T14:00:57.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.280 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83348 00:20:04.538 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:04.797 [2024-12-11 14:00:57.741271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=83500 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 83500 /var/tmp/bdevperf.sock 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83500 ']' 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.797 14:00:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:04.797 [2024-12-11 14:00:57.807667] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:20:04.797 [2024-12-11 14:00:57.807778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83500 ] 00:20:05.069 [2024-12-11 14:00:57.949653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.069 [2024-12-11 14:00:58.011224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.069 [2024-12-11 14:00:58.067160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:06.018 14:00:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.018 14:00:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:06.018 14:00:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:06.276 14:00:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:06.534 NVMe0n1 00:20:06.534 14:00:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=83518 00:20:06.534 14:00:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.534 14:00:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:06.792 Running I/O for 10 seconds... 
00:20:07.726 14:01:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:07.986 6939.00 IOPS, 27.11 MiB/s [2024-12-11T14:01:01.033Z] [2024-12-11 14:01:00.797950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62616 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.986 [2024-12-11 14:01:00.798792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.986 [2024-12-11 14:01:00.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.986 [2024-12-11 14:01:00.798833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.986 [2024-12-11 14:01:00.798854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.986 [2024-12-11 14:01:00.798865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.798875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.987 [2024-12-11 14:01:00.799239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.799757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.987 [2024-12-11 14:01:00.799777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.987 [2024-12-11 14:01:00.799798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.799809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.987 [2024-12-11 14:01:00.800593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.800687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.800856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.987 [2024-12-11 14:01:00.801975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.987 [2024-12-11 14:01:00.801985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.801997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:07.988 [2024-12-11 14:01:00.802017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.802861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.802870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.803904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.803913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.988 [2024-12-11 14:01:00.804864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.988 [2024-12-11 14:01:00.804876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.804885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62320 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.804905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.804916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.804925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.804936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.804945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:07.989 [2024-12-11 14:01:00.805546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.805884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.805893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 
14:01:00.806241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.989 [2024-12-11 14:01:00.806983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.806994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e100 is same with the state(6) to be set 00:20:07.989 [2024-12-11 14:01:00.807261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.989 [2024-12-11 14:01:00.807279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.989 [2024-12-11 14:01:00.807288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62536 len:8 PRP1 0x0 PRP2 0x0 00:20:07.989 [2024-12-11 14:01:00.807297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.807651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.989 [2024-12-11 14:01:00.807680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.807692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.989 [2024-12-11 
14:01:00.807717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.807729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.989 [2024-12-11 14:01:00.807738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.807748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.989 [2024-12-11 14:01:00.807989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.989 [2024-12-11 14:01:00.808000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:07.989 [2024-12-11 14:01:00.808444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:07.989 [2024-12-11 14:01:00.808480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:07.989 [2024-12-11 14:01:00.808812] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.989 [2024-12-11 14:01:00.808845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:07.989 [2024-12-11 14:01:00.808857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:07.989 [2024-12-11 14:01:00.808877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:07.989 [2024-12-11 14:01:00.809167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:07.989 [2024-12-11 14:01:00.809192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:07.989 [2024-12-11 14:01:00.809203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:07.989 [2024-12-11 14:01:00.809214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:07.989 [2024-12-11 14:01:00.809224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:07.989 14:01:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:08.924 3854.00 IOPS, 15.05 MiB/s [2024-12-11T14:01:01.971Z] [2024-12-11 14:01:01.809559] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.924 [2024-12-11 14:01:01.809613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:08.924 [2024-12-11 14:01:01.809629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:08.924 [2024-12-11 14:01:01.809654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:08.924 [2024-12-11 14:01:01.809674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:08.924 [2024-12-11 14:01:01.809684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:08.924 [2024-12-11 14:01:01.809695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:08.924 [2024-12-11 14:01:01.809718] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:08.924 [2024-12-11 14:01:01.809742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:08.924 14:01:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:09.182 [2024-12-11 14:01:02.133400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.182 14:01:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 83518 00:20:10.007 2569.33 IOPS, 10.04 MiB/s [2024-12-11T14:01:03.054Z] [2024-12-11 14:01:02.829623] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:20:11.879 1927.00 IOPS, 7.53 MiB/s
[2024-12-11T14:01:05.862Z] 2774.60 IOPS, 10.84 MiB/s
[2024-12-11T14:01:06.798Z] 3485.50 IOPS, 13.62 MiB/s
[2024-12-11T14:01:07.734Z] 3993.29 IOPS, 15.60 MiB/s
[2024-12-11T14:01:09.109Z] 4374.00 IOPS, 17.09 MiB/s
[2024-12-11T14:01:10.045Z] 4656.11 IOPS, 18.19 MiB/s
[2024-12-11T14:01:10.045Z] 4868.90 IOPS, 19.02 MiB/s
00:20:16.998 Latency(us)
00:20:16.998 [2024-12-11T14:01:10.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:16.998 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:16.998 Verification LBA range: start 0x0 length 0x4000
00:20:16.998 NVMe0n1 : 10.02 4873.98 19.04 0.00 0.00 26231.31 3515.11 3035150.89
00:20:16.998 [2024-12-11T14:01:10.045Z] ===================================================================================================================
00:20:16.998 [2024-12-11T14:01:10.045Z] Total : 4873.98 19.04 0.00 0.00 26231.31 3515.11 3035150.89
00:20:16.998 {
00:20:16.998 "results": [
00:20:16.998 {
00:20:16.998 "job": "NVMe0n1",
00:20:16.998 "core_mask": "0x4",
00:20:16.998 "workload": "verify",
00:20:16.998 "status": "finished",
00:20:16.998 "verify_range": {
00:20:16.998 "start": 0,
00:20:16.998 "length": 16384
00:20:16.998 },
00:20:16.998 "queue_depth": 128,
00:20:16.998 "io_size": 4096,
00:20:16.998 "runtime": 10.015624,
00:20:16.998 "iops": 4873.984886014092,
00:20:16.998 "mibps": 19.039003460992546,
00:20:16.998 "io_failed": 0,
00:20:16.998 "io_timeout": 0,
00:20:16.998 "avg_latency_us": 26231.313956079975,
00:20:16.998 "min_latency_us": 3515.112727272727,
00:20:16.998 "max_latency_us": 3035150.8945454545
00:20:16.998 }
00:20:16.998 ],
00:20:16.998 "core_count": 1
00:20:16.998 }
00:20:16.998 14:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=83628
00:20:16.998 14:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:16.998 14:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:16.998 Running I/O for 10 seconds...
00:20:17.933 14:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:18.193 6932.00 IOPS, 27.08 MiB/s [2024-12-11T14:01:11.240Z] [2024-12-11 14:01:10.985671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26297a0 is same with the state(6) to be set 00:20:18.193 [2024-12-11 14:01:10.985743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26297a0 is same with the state(6) to be set 00:20:18.193 [2024-12-11 14:01:10.985755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26297a0 is same with the state(6) to be set 00:20:18.193 [2024-12-11 14:01:10.986596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.193 [2024-12-11 14:01:10.986675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.193 [2024-12-11 14:01:10.986716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.193 [2024-12-11 14:01:10.986731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.193 [2024-12-11 14:01:10.986744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.193 [2024-12-11 14:01:10.986753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.193 [2024-12-11 14:01:10.986766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.193 [2024-12-11 14:01:10.987032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.193 [2024-12-11 14:01:10.987060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:18.194 [2024-12-11 14:01:10.987304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.987463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.987484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.987935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.987959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.987980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.987993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.988002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.988024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 
14:01:10.988252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.988266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.988288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.988932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.988943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.989893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.989905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.989914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:92 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.194 [2024-12-11 14:01:10.990518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.194 [2024-12-11 14:01:10.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.194 [2024-12-11 14:01:10.990693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.990722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.990975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.990999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64960 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.991604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.991626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.991647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.991659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.991799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 
[2024-12-11 14:01:10.992130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.992970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.993116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.993245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.993266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.993534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.195 [2024-12-11 14:01:10.993677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.993955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.993981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.993993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.195 [2024-12-11 14:01:10.994754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.195 [2024-12-11 14:01:10.994766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.994775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.995904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.995916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.996154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.996183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 
14:01:10.996195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.196 [2024-12-11 14:01:10.996205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.196 [2024-12-11 14:01:10.996747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.996759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5f010 is same with the state(6) to be set 00:20:18.196 [2024-12-11 14:01:10.996975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.996996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65112 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 
[2024-12-11 14:01:10.997030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65632 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65640 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65648 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65664 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65672 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.997851] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.997859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.997869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65680 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.997996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.998018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.998155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.998292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.998424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.998440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.196 [2024-12-11 14:01:10.998449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:20:18.196 [2024-12-11 14:01:10.998564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.196 [2024-12-11 14:01:10.998578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.196 [2024-12-11 14:01:10.998588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.998834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65704 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.998858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.998870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:10.998879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.998887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65712 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.998897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.998907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:10.998915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.998923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65720 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.998932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.999073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:18.197 [2024-12-11 14:01:10.999168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.999185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65728 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.999196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.999207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:10.999216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.999224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65736 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.999233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.999502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:10.999513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.999639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65744 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.999658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.999785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:10.999804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:10.999924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65752 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:10.999942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:10.999954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:11.000178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:11.000198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65760 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:11.000210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.000222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:11.000230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:11.000239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65768 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:11.000249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.000639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:11.000658] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:11.000669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65776 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:11.000679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.000690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:18.197 [2024-12-11 14:01:11.000710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:18.197 [2024-12-11 14:01:11.000721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65784 len:8 PRP1 0x0 PRP2 0x0 00:20:18.197 [2024-12-11 14:01:11.000730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.001061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.197 [2024-12-11 14:01:11.001088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.001107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.197 [2024-12-11 14:01:11.001116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.001127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.197 [2024-12-11 14:01:11.001137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.001147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.197 [2024-12-11 14:01:11.001155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.197 [2024-12-11 14:01:11.001271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:18.197 14:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:18.197 [2024-12-11 14:01:11.001772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:18.197 [2024-12-11 14:01:11.001804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:18.197 [2024-12-11 14:01:11.002114] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.197 [2024-12-11 14:01:11.002146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:18.197 [2024-12-11 14:01:11.002159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:18.197 [2024-12-11 14:01:11.002441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad 
file descriptor 00:20:18.197 [2024-12-11 14:01:11.002542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:18.197 [2024-12-11 14:01:11.002555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:18.197 [2024-12-11 14:01:11.002567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:18.197 [2024-12-11 14:01:11.002580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:18.197 [2024-12-11 14:01:11.002592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:19.141 4048.00 IOPS, 15.81 MiB/s [2024-12-11T14:01:12.188Z] [2024-12-11 14:01:12.002772] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.141 [2024-12-11 14:01:12.002852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:19.141 [2024-12-11 14:01:12.002869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:19.141 [2024-12-11 14:01:12.002898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:19.141 [2024-12-11 14:01:12.002919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:19.141 [2024-12-11 14:01:12.002932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:19.141 [2024-12-11 14:01:12.002945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:19.141 [2024-12-11 14:01:12.002958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:19.141 [2024-12-11 14:01:12.002970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:20.088 2698.67 IOPS, 10.54 MiB/s [2024-12-11T14:01:13.135Z] [2024-12-11 14:01:13.003151] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.088 [2024-12-11 14:01:13.003231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:20.088 [2024-12-11 14:01:13.003248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:20.088 [2024-12-11 14:01:13.003278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:20.088 [2024-12-11 14:01:13.003309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:20.088 [2024-12-11 14:01:13.003320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:20.088 [2024-12-11 14:01:13.003333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:20.088 [2024-12-11 14:01:13.003346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:20.088 [2024-12-11 14:01:13.003360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:21.022 2024.00 IOPS, 7.91 MiB/s [2024-12-11T14:01:14.069Z] 14:01:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:21.022 [2024-12-11 14:01:14.006966] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.022 [2024-12-11 14:01:14.007053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0070 with addr=10.0.0.3, port=4420 00:20:21.022 [2024-12-11 14:01:14.007072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0070 is same with the state(6) to be set 00:20:21.022 [2024-12-11 14:01:14.007637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0070 (9): Bad file descriptor 00:20:21.022 [2024-12-11 14:01:14.008089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:21.022 [2024-12-11 14:01:14.008117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:21.022 [2024-12-11 14:01:14.008148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:21.022 [2024-12-11 14:01:14.008162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:21.022 [2024-12-11 14:01:14.008175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:21.281 [2024-12-11 14:01:14.270123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:21.281 14:01:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 83628 00:20:22.105 1619.20 IOPS, 6.33 MiB/s [2024-12-11T14:01:15.152Z] [2024-12-11 14:01:15.039249] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
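Editor's sketch (hedged): the lines above show the listener-bounce phase of the timeout test — while the TCP listener is down the host's reconnect attempts fail with connect() errno 111 and "Resetting controller failed", and as soon as nvmf_subsystem_add_listener brings the target back ("NVMe/TCP Target Listening on 10.0.0.3 port 4420"), the next reset succeeds. The snippet below is a minimal reconstruction of that sequence using only the rpc.py calls visible in this log (the remove-listener call appears later, at timeout.sh@126, for the second run); the $bdevperf_pid variable is a hypothetical stand-in for the PID waited on at timeout.sh@103.
  # Reconstructed listener-bounce sequence (paths, NQN and address copied from the log)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener: in-flight I/O is aborted (SQ DELETION) and the host
  # enters its reconnect loop, failing with errno 111 while nothing listens.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  # Restore the listener; the next reconnect attempt should succeed and
  # bdev_nvme reports "Resetting controller successful".
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  # $bdevperf_pid is assumed to hold the backgrounded test PID (83628 in this log)
  wait "$bdevperf_pid"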
00:20:23.973 2613.17 IOPS, 10.21 MiB/s [2024-12-11T14:01:17.954Z] 3595.29 IOPS, 14.04 MiB/s [2024-12-11T14:01:18.928Z] 4370.88 IOPS, 17.07 MiB/s [2024-12-11T14:01:19.863Z] 4921.67 IOPS, 19.23 MiB/s [2024-12-11T14:01:19.863Z] 5367.10 IOPS, 20.97 MiB/s 00:20:26.816 Latency(us) 00:20:26.816 [2024-12-11T14:01:19.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.816 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.816 Verification LBA range: start 0x0 length 0x4000 00:20:26.816 NVMe0n1 : 10.01 5373.64 20.99 3623.06 0.00 14201.96 882.50 3035150.89 00:20:26.816 [2024-12-11T14:01:19.863Z] =================================================================================================================== 00:20:26.816 [2024-12-11T14:01:19.863Z] Total : 5373.64 20.99 3623.06 0.00 14201.96 0.00 3035150.89 00:20:26.816 { 00:20:26.816 "results": [ 00:20:26.816 { 00:20:26.816 "job": "NVMe0n1", 00:20:26.816 "core_mask": "0x4", 00:20:26.816 "workload": "verify", 00:20:26.816 "status": "finished", 00:20:26.816 "verify_range": { 00:20:26.816 "start": 0, 00:20:26.816 "length": 16384 00:20:26.816 }, 00:20:26.816 "queue_depth": 128, 00:20:26.816 "io_size": 4096, 00:20:26.816 "runtime": 10.008673, 00:20:26.816 "iops": 5373.6394425115095, 00:20:26.816 "mibps": 20.990779072310584, 00:20:26.816 "io_failed": 36262, 00:20:26.816 "io_timeout": 0, 00:20:26.816 "avg_latency_us": 14201.960938561022, 00:20:26.816 "min_latency_us": 882.5018181818182, 00:20:26.816 "max_latency_us": 3035150.8945454545 00:20:26.816 } 00:20:26.816 ], 00:20:26.816 "core_count": 1 00:20:26.816 } 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 83500 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83500 ']' 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83500 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83500 00:20:27.075 killing process with pid 83500 00:20:27.075 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.075 00:20:27.075 Latency(us) 00:20:27.075 [2024-12-11T14:01:20.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.075 [2024-12-11T14:01:20.122Z] =================================================================================================================== 00:20:27.075 [2024-12-11T14:01:20.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83500' 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83500 00:20:27.075 14:01:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83500 00:20:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
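Editor's note (hedged): the run above ends with bdevperf printing a JSON results blob alongside the human-readable latency table. If that JSON were captured to a file (here a hypothetical result.json), the headline numbers could be pulled out with jq using the field names visible in the log; this is only an illustrative one-liner, not part of the test scripts.
  # Illustrative only: extract the key metrics from a captured bdevperf result blob
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, failed=\(.io_failed), avg_lat_us=\(.avg_latency_us)"' result.json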
00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=83743 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 83743 /var/tmp/bdevperf.sock 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83743 ']' 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.334 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.334 [2024-12-11 14:01:20.230078] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:20:27.334 [2024-12-11 14:01:20.230368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83743 ] 00:20:27.334 [2024-12-11 14:01:20.373547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.592 [2024-12-11 14:01:20.445691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.592 [2024-12-11 14:01:20.501183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.592 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.592 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:27.592 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=83750 00:20:27.592 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83743 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:27.592 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:27.851 14:01:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:28.418 NVMe0n1 00:20:28.418 14:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83793 00:20:28.418 14:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.418 14:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:28.418 Running I/O for 10 seconds... 
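Editor's sketch (hedged): the block above sets up the second bdevperf run — bdevperf is started idle on core 2, a bpftrace probe is attached for timeout tracing, the NVMe bdev options and the TCP controller (with the ctrlr-loss/reconnect parameters under test) are configured over the bdevperf RPC socket, and perform_tests kicks off the I/O. The snippet below restates that sequence as a standalone script; every command and flag is copied verbatim from the log, only the ordering comments and the captured-PID handling are added, and flag explanations beyond what the log shows should be treated as assumptions.
  # Reconstructed bdevperf setup from the log above
  spdk=/home/vagrant/spdk_repo/spdk
  # Start bdevperf on core 2 (-m 0x4), idle (-z) until driven over its RPC
  # socket: 128 queue depth, 4 KiB random reads for 10 seconds.
  "$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # Attach the bpftrace timeout probe to the bdevperf process (PID 83743 in the log).
  "$spdk"/scripts/bpftrace.sh "$bdevperf_pid" "$spdk"/scripts/bpf/nvmf_timeout.bt &
  # Configure the NVMe bdev layer and attach the TCP controller with the
  # ctrlr-loss-timeout / reconnect-delay values under test.
  "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Start the I/O phase ("Running I/O for 10 seconds..." above).
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &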
00:20:29.363 14:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:29.624 14732.00 IOPS, 57.55 MiB/s [2024-12-11T14:01:22.671Z] [2024-12-11 14:01:22.485333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.624 [2024-12-11 14:01:22.485388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.624 [2024-12-11 14:01:22.485401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.624 [2024-12-11 14:01:22.485410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 
00:20:29.625 [2024-12-11 14:01:22.485551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485955] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.485996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the 
state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.625 [2024-12-11 14:01:22.486157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637b50 is same with the state(6) to be set 00:20:29.626 [2024-12-11 14:01:22.486569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.486600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.486923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.486952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.486966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.486976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.486988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.486998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
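The long run of paired READ / "ABORTED - SQ DELETION (00/08)" records above, which continues below, is the expected fallout of deleting a submission queue while I/O is still outstanding: with the random-read job running at queue depth 128, roughly one abort completion is printed per in-flight read (the cid values count down from 124). As a quick sanity check against a saved copy of this console output, the number of abort completions should land near the queue depth. A minimal sketch, assuming the output has been captured to a local file (the path is a hypothetical stand-in):

```bash
#!/usr/bin/env bash
# Count how many completions were aborted by SQ deletion in a captured copy of this log.
# LOG is a hypothetical path; nothing below is SPDK-specific.
LOG=${1:-/tmp/nvmf_timeout_console.log}

aborted=$(grep -c 'ABORTED - SQ DELETION (00/08)' "$LOG")
echo "aborted completions: $aborted"   # expected to be close to the queue depth (128 here)

# "(00/08)" is the NVMe status pair SCT=0x00 (generic command status),
# SC=0x08 (Command Aborted due to SQ Deletion).
```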
00:20:29.626 [2024-12-11 14:01:22.487198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 
14:01:22.487409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.626 [2024-12-11 14:01:22.487418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.626 [2024-12-11 14:01:22.487430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:82 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.487986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.487997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:29.627 [2024-12-11 14:01:22.488332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.627 [2024-12-11 14:01:22.488343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.627 [2024-12-11 14:01:22.488352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.488981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.488990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.628 [2024-12-11 14:01:22.489941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.628 [2024-12-11 14:01:22.489950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.489961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.489970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.489981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.489990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:29.629 [2024-12-11 14:01:22.490125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490343] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.629 [2024-12-11 14:01:22.490414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7100 is same with the state(6) to be set 00:20:29.629 [2024-12-11 14:01:22.490442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:29.629 [2024-12-11 14:01:22.490450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:29.629 [2024-12-11 14:01:22.490459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102168 len:8 PRP1 0x0 PRP2 0x0 00:20:29.629 [2024-12-11 14:01:22.490468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.629 [2024-12-11 14:01:22.490648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.629 [2024-12-11 14:01:22.490669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.490679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.629 [2024-12-11 14:01:22.490688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.491150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.629 [2024-12-11 14:01:22.491532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.629 [2024-12-11 14:01:22.491907] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49070 is same with the state(6) to be set 00:20:29.629 [2024-12-11 14:01:22.492684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:29.629 [2024-12-11 14:01:22.493155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49070 (9): Bad file descriptor 00:20:29.629 [2024-12-11 14:01:22.493548] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.629 [2024-12-11 14:01:22.493575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe49070 with addr=10.0.0.3, port=4420 00:20:29.629 [2024-12-11 14:01:22.493587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49070 is same with the state(6) to be set 00:20:29.629 [2024-12-11 14:01:22.493618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49070 (9): Bad file descriptor 00:20:29.629 [2024-12-11 14:01:22.493635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:29.629 [2024-12-11 14:01:22.493645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:29.629 [2024-12-11 14:01:22.493656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:29.629 [2024-12-11 14:01:22.493667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:29.629 [2024-12-11 14:01:22.493684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:29.629 14:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83793 00:20:31.526 8700.50 IOPS, 33.99 MiB/s [2024-12-11T14:01:24.573Z] 5800.33 IOPS, 22.66 MiB/s [2024-12-11T14:01:24.573Z] [2024-12-11 14:01:24.503998] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:31.527 [2024-12-11 14:01:24.504075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe49070 with addr=10.0.0.3, port=4420 00:20:31.527 [2024-12-11 14:01:24.504092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49070 is same with the state(6) to be set 00:20:31.527 [2024-12-11 14:01:24.504118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49070 (9): Bad file descriptor 00:20:31.527 [2024-12-11 14:01:24.504149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:31.527 [2024-12-11 14:01:24.504160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:31.527 [2024-12-11 14:01:24.504180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:31.527 [2024-12-11 14:01:24.504192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
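Each reconnect attempt above fails inside uring_sock_create with errno = 111 because the target's listener at 10.0.0.3:4420 stays unreachable for the duration of this case, so the initiator keeps cycling through disconnect, failed reconnect, and "Resetting controller failed" in roughly two-second steps (14:01:22, :24, and so on below). If the numeric errno is unfamiliar, it can be resolved to its symbolic name on the build host; a small sketch, not specific to SPDK:

```bash
# Map errno 111 to its symbolic name and message on Linux.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused

# The same constant as defined in the kernel UAPI headers:
grep -w 111 /usr/include/asm-generic/errno.h
```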
00:20:31.527 [2024-12-11 14:01:24.504204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:33.397 4350.25 IOPS, 16.99 MiB/s [2024-12-11T14:01:26.702Z] 3480.20 IOPS, 13.59 MiB/s [2024-12-11T14:01:26.702Z] [2024-12-11 14:01:26.504751] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.655 [2024-12-11 14:01:26.504809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe49070 with addr=10.0.0.3, port=4420 00:20:33.655 [2024-12-11 14:01:26.504826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49070 is same with the state(6) to be set 00:20:33.655 [2024-12-11 14:01:26.504850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49070 (9): Bad file descriptor 00:20:33.655 [2024-12-11 14:01:26.504870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:33.655 [2024-12-11 14:01:26.504879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:33.655 [2024-12-11 14:01:26.504890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:33.655 [2024-12-11 14:01:26.504902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:33.655 [2024-12-11 14:01:26.504913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:35.526 2900.17 IOPS, 11.33 MiB/s [2024-12-11T14:01:28.573Z] 2485.86 IOPS, 9.71 MiB/s [2024-12-11T14:01:28.573Z] [2024-12-11 14:01:28.505372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:35.526 [2024-12-11 14:01:28.505423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:35.526 [2024-12-11 14:01:28.505452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:35.526 [2024-12-11 14:01:28.505463] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:35.526 [2024-12-11 14:01:28.505475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
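The falling IOPS/MiB/s pairs interleaved with the reconnect attempts (8700.50 IOPS / 33.99 MiB/s down to 2485.86 IOPS / 9.71 MiB/s, and 2175.12 just below) are running averages for the 4096-byte random-read job; once the controller becomes unreachable no new I/O completes, so the average simply decays as the runtime grows. Both columns are easy to re-check by hand; a small sketch using awk:

```bash
# MiB/s is just the running-average IOPS scaled by the 4096-byte I/O size.
awk 'BEGIN { printf "%.2f MiB/s\n", 8700.50 * 4096 / (1024 * 1024) }'   # -> 33.99, as printed above

# The later samples are consistent with no further completions after the disconnect:
# the same completed-I/O total spread over a growing runtime, e.g. 8700.50 over 2s re-averaged over 8s.
awk 'BEGIN { printf "%.2f IOPS\n", 8700.50 * 2 / 8 }'                   # -> 2175.12, the final sample below
```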
00:20:36.461 2175.12 IOPS, 8.50 MiB/s 00:20:36.461 Latency(us) 00:20:36.461 [2024-12-11T14:01:29.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.461 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:36.461 NVMe0n1 : 8.19 2125.11 8.30 15.63 0.00 59886.85 8281.37 7046430.72 00:20:36.461 [2024-12-11T14:01:29.508Z] =================================================================================================================== 00:20:36.461 [2024-12-11T14:01:29.508Z] Total : 2125.11 8.30 15.63 0.00 59886.85 8281.37 7046430.72 00:20:36.461 { 00:20:36.461 "results": [ 00:20:36.461 { 00:20:36.461 "job": "NVMe0n1", 00:20:36.461 "core_mask": "0x4", 00:20:36.461 "workload": "randread", 00:20:36.461 "status": "finished", 00:20:36.461 "queue_depth": 128, 00:20:36.461 "io_size": 4096, 00:20:36.461 "runtime": 8.188284, 00:20:36.461 "iops": 2125.1094856993236, 00:20:36.461 "mibps": 8.301208928512983, 00:20:36.461 "io_failed": 128, 00:20:36.461 "io_timeout": 0, 00:20:36.461 "avg_latency_us": 59886.85383245427, 00:20:36.461 "min_latency_us": 8281.367272727273, 00:20:36.461 "max_latency_us": 7046430.72 00:20:36.461 } 00:20:36.461 ], 00:20:36.461 "core_count": 1 00:20:36.461 } 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:36.720 Attaching 5 probes... 00:20:36.720 1345.836688: reset bdev controller NVMe0 00:20:36.720 1346.545433: reconnect bdev controller NVMe0 00:20:36.720 3357.038381: reconnect delay bdev controller NVMe0 00:20:36.720 3357.061059: reconnect bdev controller NVMe0 00:20:36.720 5357.790530: reconnect delay bdev controller NVMe0 00:20:36.720 5357.813993: reconnect bdev controller NVMe0 00:20:36.720 7358.505928: reconnect delay bdev controller NVMe0 00:20:36.720 7358.529025: reconnect bdev controller NVMe0 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 83750 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 83743 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83743 ']' 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83743 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.720 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83743 00:20:36.720 killing process with pid 83743 00:20:36.720 Received shutdown signal, test time was about 8.251349 seconds 00:20:36.720 00:20:36.720 Latency(us) 00:20:36.720 [2024-12-11T14:01:29.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.721 [2024-12-11T14:01:29.768Z] =================================================================================================================== 00:20:36.721 [2024-12-11T14:01:29.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.721 14:01:29 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:36.721 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:36.721 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83743' 00:20:36.721 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83743 00:20:36.721 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83743 00:20:36.721 14:01:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.288 rmmod nvme_tcp 00:20:37.288 rmmod nvme_fabrics 00:20:37.288 rmmod nvme_keyring 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 83305 ']' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 83305 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83305 ']' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83305 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83305 00:20:37.288 killing process with pid 83305 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83305' 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83305 00:20:37.288 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83305 00:20:37.555 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.555 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.555 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.556 14:01:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:37.556 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:37.832 00:20:37.832 real 0m47.165s 00:20:37.832 user 2m18.146s 00:20:37.832 sys 0m5.944s 00:20:37.832 ************************************ 00:20:37.832 END TEST nvmf_timeout 00:20:37.832 ************************************ 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:37.832 ************************************ 00:20:37.832 END TEST nvmf_host 00:20:37.832 ************************************ 00:20:37.832 00:20:37.832 real 5m12.015s 00:20:37.832 user 13m33.210s 00:20:37.832 sys 1m10.779s 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.832 14:01:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.832 14:01:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:37.832 14:01:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:37.832 ************************************ 00:20:37.832 END TEST nvmf_tcp 00:20:37.832 ************************************ 00:20:37.832 00:20:37.832 real 12m57.988s 00:20:37.832 user 31m8.181s 00:20:37.832 sys 3m12.051s 00:20:37.832 14:01:30 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.832 14:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.832 14:01:30 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:37.832 14:01:30 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:37.832 14:01:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.832 14:01:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.832 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.832 ************************************ 00:20:37.832 START TEST nvmf_dif 00:20:37.832 ************************************ 00:20:37.832 14:01:30 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:38.092 * Looking for test storage... 00:20:38.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:38.092 14:01:30 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.092 14:01:30 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.092 14:01:30 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.092 --rc genhtml_branch_coverage=1 00:20:38.092 --rc genhtml_function_coverage=1 00:20:38.092 --rc genhtml_legend=1 00:20:38.092 --rc geninfo_all_blocks=1 00:20:38.092 --rc geninfo_unexecuted_blocks=1 00:20:38.092 00:20:38.092 ' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.092 --rc genhtml_branch_coverage=1 00:20:38.092 --rc genhtml_function_coverage=1 00:20:38.092 --rc genhtml_legend=1 00:20:38.092 --rc geninfo_all_blocks=1 00:20:38.092 --rc geninfo_unexecuted_blocks=1 00:20:38.092 00:20:38.092 ' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.092 --rc genhtml_branch_coverage=1 00:20:38.092 --rc genhtml_function_coverage=1 00:20:38.092 --rc genhtml_legend=1 00:20:38.092 --rc geninfo_all_blocks=1 00:20:38.092 --rc geninfo_unexecuted_blocks=1 00:20:38.092 00:20:38.092 ' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.092 --rc genhtml_branch_coverage=1 00:20:38.092 --rc genhtml_function_coverage=1 00:20:38.092 --rc genhtml_legend=1 00:20:38.092 --rc geninfo_all_blocks=1 00:20:38.092 --rc geninfo_unexecuted_blocks=1 00:20:38.092 00:20:38.092 ' 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.092 14:01:31 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.092 14:01:31 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.092 14:01:31 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.092 14:01:31 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.092 14:01:31 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.092 14:01:31 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:38.092 14:01:31 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.092 14:01:31 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:38.092 14:01:31 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:38.092 14:01:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.092 14:01:31 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.093 Cannot find device 
"nvmf_init_br" 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.093 Cannot find device "nvmf_init_br2" 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.093 Cannot find device "nvmf_tgt_br" 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:38.093 14:01:31 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.352 Cannot find device "nvmf_tgt_br2" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.352 Cannot find device "nvmf_init_br" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.352 Cannot find device "nvmf_init_br2" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.352 Cannot find device "nvmf_tgt_br" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.352 Cannot find device "nvmf_tgt_br2" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.352 Cannot find device "nvmf_br" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:38.352 Cannot find device "nvmf_init_if" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.352 Cannot find device "nvmf_init_if2" 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.352 14:01:31 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:38.611 14:01:31 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:38.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:20:38.611 00:20:38.611 --- 10.0.0.3 ping statistics --- 00:20:38.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.611 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:38.612 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:38.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:38.612 00:20:38.612 --- 10.0.0.4 ping statistics --- 00:20:38.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.612 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:38.612 00:20:38.612 --- 10.0.0.1 ping statistics --- 00:20:38.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.612 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:38.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:20:38.612 00:20:38.612 --- 10.0.0.2 ping statistics --- 00:20:38.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.612 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:38.612 14:01:31 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:38.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.870 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.870 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.870 14:01:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:38.870 14:01:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=84289 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.870 14:01:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 84289 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 84289 ']' 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.870 14:01:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:39.129 [2024-12-11 14:01:31.966888] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:20:39.129 [2024-12-11 14:01:31.967554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.129 [2024-12-11 14:01:32.124511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.388 [2024-12-11 14:01:32.184427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.388 [2024-12-11 14:01:32.184881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.388 [2024-12-11 14:01:32.184907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.388 [2024-12-11 14:01:32.184919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.388 [2024-12-11 14:01:32.184930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.388 [2024-12-11 14:01:32.185380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.388 [2024-12-11 14:01:32.247909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:39.388 14:01:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 14:01:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.388 14:01:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:39.388 14:01:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 [2024-12-11 14:01:32.371610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.388 14:01:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 ************************************ 00:20:39.388 START TEST fio_dif_1_default 00:20:39.388 ************************************ 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:39.388 14:01:32 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 bdev_null0 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 [2024-12-11 14:01:32.419819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.388 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.389 { 00:20:39.389 "params": { 00:20:39.389 "name": "Nvme$subsystem", 00:20:39.389 "trtype": "$TEST_TRANSPORT", 00:20:39.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.389 "adrfam": "ipv4", 00:20:39.389 "trsvcid": "$NVMF_PORT", 00:20:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.389 "hdgst": ${hdgst:-false}, 00:20:39.389 "ddgst": ${ddgst:-false} 00:20:39.389 }, 00:20:39.389 "method": "bdev_nvme_attach_controller" 00:20:39.389 } 00:20:39.389 EOF 00:20:39.389 )") 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:39.389 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:39.648 "params": { 00:20:39.648 "name": "Nvme0", 00:20:39.648 "trtype": "tcp", 00:20:39.648 "traddr": "10.0.0.3", 00:20:39.648 "adrfam": "ipv4", 00:20:39.648 "trsvcid": "4420", 00:20:39.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:39.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:39.648 "hdgst": false, 00:20:39.648 "ddgst": false 00:20:39.648 }, 00:20:39.648 "method": "bdev_nvme_attach_controller" 00:20:39.648 }' 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:39.648 14:01:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:39.648 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:39.648 fio-3.35 00:20:39.648 Starting 1 thread 00:20:51.854 00:20:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=84348: Wed Dec 11 14:01:43 2024 00:20:51.854 read: IOPS=8273, BW=32.3MiB/s (33.9MB/s)(323MiB/10001msec) 00:20:51.854 slat (nsec): min=6320, max=89786, avg=9099.86, stdev=3264.32 00:20:51.854 clat (usec): min=348, max=2224, avg=456.40, stdev=47.75 00:20:51.854 lat (usec): min=355, max=2246, avg=465.50, stdev=48.28 00:20:51.854 clat percentiles (usec): 00:20:51.854 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 433], 00:20:51.854 | 30.00th=[ 441], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 457], 00:20:51.854 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 510], 00:20:51.854 | 99.00th=[ 570], 99.50th=[ 725], 99.90th=[ 1020], 99.95th=[ 1074], 00:20:51.854 | 99.99th=[ 1876] 00:20:51.854 bw ( KiB/s): min=29184, max=34592, per=100.00%, avg=33114.95, stdev=1199.34, samples=19 00:20:51.854 iops : min= 7296, max= 8648, avg=8278.74, stdev=299.83, samples=19 00:20:51.854 lat (usec) : 500=92.87%, 750=6.74%, 1000=0.25% 00:20:51.854 lat (msec) : 2=0.13%, 4=0.01% 00:20:51.854 cpu : usr=84.66%, sys=13.40%, ctx=44, majf=0, minf=9 00:20:51.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.854 issued rwts: total=82744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:51.854 00:20:51.854 Run status group 0 (all 
jobs): 00:20:51.854 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=323MiB (339MB), run=10001-10001msec 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 ************************************ 00:20:51.854 END TEST fio_dif_1_default 00:20:51.854 ************************************ 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 00:20:51.854 real 0m11.100s 00:20:51.854 user 0m9.176s 00:20:51.854 sys 0m1.641s 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:51.854 14:01:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.854 14:01:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 ************************************ 00:20:51.854 START TEST fio_dif_1_multi_subsystems 00:20:51.854 ************************************ 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 bdev_null0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 [2024-12-11 14:01:43.573641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 bdev_null1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:51.854 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.854 { 00:20:51.854 "params": { 00:20:51.854 "name": "Nvme$subsystem", 00:20:51.854 "trtype": "$TEST_TRANSPORT", 00:20:51.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.854 "adrfam": "ipv4", 00:20:51.854 "trsvcid": "$NVMF_PORT", 00:20:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.854 "hdgst": ${hdgst:-false}, 00:20:51.854 "ddgst": ${ddgst:-false} 00:20:51.854 }, 00:20:51.855 "method": "bdev_nvme_attach_controller" 00:20:51.855 } 00:20:51.855 EOF 00:20:51.855 )") 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@73 -- # cat 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.855 { 00:20:51.855 "params": { 00:20:51.855 "name": "Nvme$subsystem", 00:20:51.855 "trtype": "$TEST_TRANSPORT", 00:20:51.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.855 "adrfam": "ipv4", 00:20:51.855 "trsvcid": "$NVMF_PORT", 00:20:51.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.855 "hdgst": ${hdgst:-false}, 00:20:51.855 "ddgst": ${ddgst:-false} 00:20:51.855 }, 00:20:51.855 "method": "bdev_nvme_attach_controller" 00:20:51.855 } 00:20:51.855 EOF 00:20:51.855 )") 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.855 "params": { 00:20:51.855 "name": "Nvme0", 00:20:51.855 "trtype": "tcp", 00:20:51.855 "traddr": "10.0.0.3", 00:20:51.855 "adrfam": "ipv4", 00:20:51.855 "trsvcid": "4420", 00:20:51.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:51.855 "hdgst": false, 00:20:51.855 "ddgst": false 00:20:51.855 }, 00:20:51.855 "method": "bdev_nvme_attach_controller" 00:20:51.855 },{ 00:20:51.855 "params": { 00:20:51.855 "name": "Nvme1", 00:20:51.855 "trtype": "tcp", 00:20:51.855 "traddr": "10.0.0.3", 00:20:51.855 "adrfam": "ipv4", 00:20:51.855 "trsvcid": "4420", 00:20:51.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.855 "hdgst": false, 00:20:51.855 "ddgst": false 00:20:51.855 }, 00:20:51.855 "method": "bdev_nvme_attach_controller" 00:20:51.855 }' 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:51.855 14:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.855 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.855 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.855 fio-3.35 00:20:51.855 Starting 2 threads 00:21:01.854 00:21:01.854 filename0: (groupid=0, jobs=1): err= 0: pid=84509: Wed Dec 11 14:01:54 2024 00:21:01.854 read: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(175MiB/10001msec) 00:21:01.854 slat (nsec): min=6381, max=95173, avg=15346.15, stdev=7134.45 00:21:01.854 clat (usec): min=651, max=2769, avg=850.78, stdev=73.08 00:21:01.854 lat (usec): min=661, max=2796, avg=866.13, stdev=75.33 00:21:01.854 clat percentiles (usec): 00:21:01.854 | 1.00th=[ 717], 5.00th=[ 750], 10.00th=[ 775], 20.00th=[ 799], 00:21:01.854 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 857], 00:21:01.854 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 979], 00:21:01.854 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1401], 99.95th=[ 1467], 00:21:01.854 | 99.99th=[ 2008] 00:21:01.854 bw ( KiB/s): min=16064, max=19264, per=50.28%, avg=18019.11, stdev=816.01, samples=19 00:21:01.854 iops : min= 4016, max= 4816, avg=4504.74, stdev=204.00, samples=19 00:21:01.854 lat (usec) : 750=4.52%, 1000=92.00% 00:21:01.854 lat (msec) : 2=3.46%, 4=0.01% 00:21:01.854 cpu : usr=90.16%, sys=8.46%, ctx=25, majf=0, minf=0 00:21:01.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.854 issued rwts: total=44800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.854 filename1: (groupid=0, jobs=1): err= 0: pid=84510: Wed Dec 11 14:01:54 2024 00:21:01.854 read: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(175MiB/10001msec) 00:21:01.854 slat (usec): min=6, max=100, avg=15.30, stdev= 7.00 00:21:01.854 clat (usec): min=620, max=2686, avg=850.91, stdev=79.52 00:21:01.854 lat (usec): min=630, max=2702, avg=866.22, stdev=82.12 00:21:01.854 clat percentiles (usec): 00:21:01.854 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 791], 00:21:01.854 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 848], 60.00th=[ 865], 00:21:01.854 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 947], 95.00th=[ 988], 00:21:01.854 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1385], 99.95th=[ 1467], 00:21:01.854 | 99.99th=[ 1991] 00:21:01.854 bw ( KiB/s): min=16064, max=19264, per=50.28%, avg=18019.11, stdev=815.11, samples=19 00:21:01.854 iops : min= 4016, max= 4816, avg=4504.74, stdev=203.77, samples=19 00:21:01.854 lat (usec) : 750=7.77%, 1000=88.29% 00:21:01.854 lat (msec) : 2=3.93%, 4=0.01% 00:21:01.854 cpu : usr=88.94%, sys=9.51%, ctx=72, majf=0, minf=0 00:21:01.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.854 issued rwts: total=44800,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:01.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.854 00:21:01.854 Run status group 0 (all jobs): 00:21:01.854 READ: bw=35.0MiB/s (36.7MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=350MiB (367MB), run=10001-10001msec 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 ************************************ 00:21:01.854 END TEST fio_dif_1_multi_subsystems 00:21:01.854 ************************************ 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.854 00:21:01.854 real 0m11.250s 00:21:01.854 user 0m18.716s 00:21:01.854 sys 0m2.113s 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 14:01:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:01.854 14:01:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:01.854 
14:01:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.854 14:01:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:01.854 ************************************ 00:21:01.854 START TEST fio_dif_rand_params 00:21:01.854 ************************************ 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:01.854 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.855 bdev_null0 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.855 [2024-12-11 14:01:54.895158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:01.855 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@106 -- # fio /dev/fd/62 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.115 { 00:21:02.115 "params": { 00:21:02.115 "name": "Nvme$subsystem", 00:21:02.115 "trtype": "$TEST_TRANSPORT", 00:21:02.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.115 "adrfam": "ipv4", 00:21:02.115 "trsvcid": "$NVMF_PORT", 00:21:02.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.115 "hdgst": ${hdgst:-false}, 00:21:02.115 "ddgst": ${ddgst:-false} 00:21:02.115 }, 00:21:02.115 "method": "bdev_nvme_attach_controller" 00:21:02.115 } 00:21:02.115 EOF 00:21:02.115 )") 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
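Note: the rpc_cmd calls traced above are the harness's wrapper around SPDK's scripts/rpc.py. As a rough standalone sketch (assuming the target application is already running and the TCP transport was created earlier in this run, e.g. with nvmf_create_transport -t tcp), the same DIF-enabled setup could be reproduced by hand like this:

# Sketch only: argument values are taken verbatim from the trace above.
cd /home/vagrant/spdk_repo/spdk
# 64 MB null bdev, 512-byte data blocks + 16 bytes of metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Expose it over NVMe/TCP on 10.0.0.3:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420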
00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:02.115 "params": { 00:21:02.115 "name": "Nvme0", 00:21:02.115 "trtype": "tcp", 00:21:02.115 "traddr": "10.0.0.3", 00:21:02.115 "adrfam": "ipv4", 00:21:02.115 "trsvcid": "4420", 00:21:02.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:02.115 "hdgst": false, 00:21:02.115 "ddgst": false 00:21:02.115 }, 00:21:02.115 "method": "bdev_nvme_attach_controller" 00:21:02.115 }' 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:02.115 14:01:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.115 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:02.115 ... 
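Note: gen_nvmf_target_json writes the initiator-side bdev configuration that fio reads through --spdk_json_conf /dev/fd/62; the printf above shows only the per-controller fragment for Nvme0. Assuming the usual "subsystems"/"bdev" envelope that nvmf/common.sh builds around those fragments with jq (the envelope itself is not printed in this log), a standalone equivalent would look roughly like:

# Hypothetical stand-in for the /dev/fd/62 config used above (sketch, not the
# literal file from this run): one bdev_nvme controller per target subsystem.
cat > /tmp/spdk_fio_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# fio then consumes it together with a job file, exactly as in the command
# traced below:
#   LD_PRELOAD=.../build/fio/spdk_bdev /usr/src/fio/fio \
#       --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_fio_bdev.json <jobfile>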
00:21:02.115 fio-3.35 00:21:02.115 Starting 3 threads 00:21:08.687 00:21:08.687 filename0: (groupid=0, jobs=1): err= 0: pid=84666: Wed Dec 11 14:02:00 2024 00:21:08.687 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(148MiB/5006msec) 00:21:08.687 slat (nsec): min=7766, max=47758, avg=15813.23, stdev=4557.64 00:21:08.687 clat (usec): min=8458, max=20153, avg=12667.96, stdev=1046.18 00:21:08.687 lat (usec): min=8473, max=20172, avg=12683.77, stdev=1045.92 00:21:08.687 clat percentiles (usec): 00:21:08.687 | 1.00th=[11338], 5.00th=[11994], 10.00th=[11994], 20.00th=[12256], 00:21:08.687 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:21:08.687 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[15795], 00:21:08.687 | 99.00th=[16057], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:21:08.687 | 99.99th=[20055] 00:21:08.687 bw ( KiB/s): min=26880, max=31488, per=33.29%, avg=30182.40, stdev=1494.92, samples=10 00:21:08.687 iops : min= 210, max= 246, avg=235.80, stdev=11.68, samples=10 00:21:08.687 lat (msec) : 10=0.25%, 20=99.49%, 50=0.25% 00:21:08.687 cpu : usr=91.17%, sys=8.23%, ctx=11, majf=0, minf=0 00:21:08.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.687 filename0: (groupid=0, jobs=1): err= 0: pid=84667: Wed Dec 11 14:02:00 2024 00:21:08.687 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(148MiB/5006msec) 00:21:08.687 slat (nsec): min=8092, max=44211, avg=14916.04, stdev=4071.03 00:21:08.687 clat (usec): min=8475, max=20169, avg=12671.49, stdev=1045.98 00:21:08.687 lat (usec): min=8489, max=20182, avg=12686.41, stdev=1045.73 00:21:08.687 clat percentiles (usec): 00:21:08.687 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:21:08.687 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:21:08.687 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[15795], 00:21:08.687 | 99.00th=[16188], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:21:08.687 | 99.99th=[20055] 00:21:08.687 bw ( KiB/s): min=26880, max=31488, per=33.29%, avg=30182.40, stdev=1494.92, samples=10 00:21:08.687 iops : min= 210, max= 246, avg=235.80, stdev=11.68, samples=10 00:21:08.687 lat (msec) : 10=0.25%, 20=99.49%, 50=0.25% 00:21:08.687 cpu : usr=91.07%, sys=8.37%, ctx=4, majf=0, minf=0 00:21:08.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.687 filename0: (groupid=0, jobs=1): err= 0: pid=84668: Wed Dec 11 14:02:00 2024 00:21:08.687 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(148MiB/5007msec) 00:21:08.687 slat (nsec): min=4716, max=42912, avg=11496.48, stdev=4903.85 00:21:08.687 clat (usec): min=10598, max=21311, avg=12678.79, stdev=1013.55 00:21:08.687 lat (usec): min=10602, max=21328, avg=12690.29, stdev=1013.40 00:21:08.687 clat percentiles (usec): 00:21:08.687 | 1.00th=[11469], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:21:08.687 | 
30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:21:08.687 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[15795], 00:21:08.687 | 99.00th=[16188], 99.50th=[17433], 99.90th=[21365], 99.95th=[21365], 00:21:08.687 | 99.99th=[21365] 00:21:08.687 bw ( KiB/s): min=26112, max=31488, per=33.29%, avg=30182.40, stdev=1621.11, samples=10 00:21:08.687 iops : min= 204, max= 246, avg=235.80, stdev=12.66, samples=10 00:21:08.687 lat (msec) : 20=99.75%, 50=0.25% 00:21:08.687 cpu : usr=90.67%, sys=8.71%, ctx=8, majf=0, minf=0 00:21:08.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.687 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.687 00:21:08.687 Run status group 0 (all jobs): 00:21:08.687 READ: bw=88.5MiB/s (92.8MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=443MiB (465MB), run=5006-5007msec 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 
16 --dif-type 2 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 bdev_null0 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 [2024-12-11 14:02:01.018940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.687 bdev_null1 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:08.687 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 
14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 bdev_null2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.688 { 00:21:08.688 
"params": { 00:21:08.688 "name": "Nvme$subsystem", 00:21:08.688 "trtype": "$TEST_TRANSPORT", 00:21:08.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "$NVMF_PORT", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.688 "hdgst": ${hdgst:-false}, 00:21:08.688 "ddgst": ${ddgst:-false} 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 } 00:21:08.688 EOF 00:21:08.688 )") 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.688 { 00:21:08.688 "params": { 00:21:08.688 "name": "Nvme$subsystem", 00:21:08.688 "trtype": "$TEST_TRANSPORT", 00:21:08.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "$NVMF_PORT", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.688 "hdgst": ${hdgst:-false}, 00:21:08.688 "ddgst": ${ddgst:-false} 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 } 00:21:08.688 EOF 00:21:08.688 )") 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.688 { 00:21:08.688 "params": { 00:21:08.688 "name": "Nvme$subsystem", 00:21:08.688 "trtype": "$TEST_TRANSPORT", 00:21:08.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "$NVMF_PORT", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.688 "hdgst": ${hdgst:-false}, 00:21:08.688 "ddgst": ${ddgst:-false} 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 } 00:21:08.688 EOF 00:21:08.688 )") 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:08.688 "params": { 00:21:08.688 "name": "Nvme0", 00:21:08.688 "trtype": "tcp", 00:21:08.688 "traddr": "10.0.0.3", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "4420", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:08.688 "hdgst": false, 00:21:08.688 "ddgst": false 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 },{ 00:21:08.688 "params": { 00:21:08.688 "name": "Nvme1", 00:21:08.688 "trtype": "tcp", 00:21:08.688 "traddr": "10.0.0.3", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "4420", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.688 "hdgst": false, 00:21:08.688 "ddgst": false 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 },{ 00:21:08.688 "params": { 00:21:08.688 "name": "Nvme2", 00:21:08.688 "trtype": "tcp", 00:21:08.688 "traddr": "10.0.0.3", 00:21:08.688 "adrfam": "ipv4", 00:21:08.688 "trsvcid": "4420", 00:21:08.688 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:08.688 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:08.688 "hdgst": false, 00:21:08.688 "ddgst": false 00:21:08.688 }, 00:21:08.688 "method": "bdev_nvme_attach_controller" 00:21:08.688 }' 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:08.688 14:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:08.688 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:08.688 ... 00:21:08.688 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:08.689 ... 00:21:08.689 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:08.689 ... 00:21:08.689 fio-3.35 00:21:08.689 Starting 24 threads 00:21:20.895 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84764: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=237, BW=950KiB/s (973kB/s)(9540KiB/10039msec) 00:21:20.895 slat (usec): min=4, max=8040, avg=38.27, stdev=344.81 00:21:20.895 clat (msec): min=23, max=139, avg=67.09, stdev=17.87 00:21:20.895 lat (msec): min=23, max=142, avg=67.13, stdev=17.88 00:21:20.895 clat percentiles (msec): 00:21:20.895 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 50], 00:21:20.895 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:21:20.895 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 96], 00:21:20.895 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 140], 00:21:20.895 | 99.99th=[ 140] 00:21:20.895 bw ( KiB/s): min= 720, max= 1144, per=4.31%, avg=950.40, stdev=88.27, samples=20 00:21:20.895 iops : min= 180, max= 286, avg=237.60, stdev=22.07, samples=20 00:21:20.895 lat (msec) : 50=22.89%, 100=72.75%, 250=4.36% 00:21:20.895 cpu : usr=35.27%, sys=1.64%, ctx=1176, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84765: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=232, BW=929KiB/s (952kB/s)(9308KiB/10016msec) 00:21:20.895 slat (usec): min=5, max=6782, avg=28.25, stdev=205.40 00:21:20.895 clat (msec): min=23, max=136, avg=68.72, stdev=17.28 00:21:20.895 lat (msec): min=23, max=136, avg=68.75, stdev=17.28 00:21:20.895 clat percentiles (msec): 00:21:20.895 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:21:20.895 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.895 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 96], 00:21:20.895 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 136], 99.95th=[ 136], 00:21:20.895 | 99.99th=[ 136] 00:21:20.895 bw ( KiB/s): min= 720, max= 1048, per=4.19%, avg=923.05, stdev=89.32, samples=19 00:21:20.895 iops : min= 180, max= 262, avg=230.74, stdev=22.31, samples=19 00:21:20.895 lat (msec) : 50=16.89%, 100=78.99%, 250=4.13% 00:21:20.895 cpu : usr=42.05%, sys=1.98%, ctx=1416, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
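Note: in the per-thread result blocks that follow, bandwidth is simply IOPS multiplied by the block size, which gives a quick sanity check on the reported numbers. Using the first thread above (IOPS=237 at bs=4096B, values taken from the log):

# 237 IOPS x 4096-byte reads ~= 970752 B/s ~= 948 KiB/s, consistent with the
# ~950 KiB/s (973 kB/s) that fio reports for this thread.
echo $((237 * 4096))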
00:21:20.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84766: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=237, BW=951KiB/s (974kB/s)(9544KiB/10032msec) 00:21:20.895 slat (usec): min=4, max=8027, avg=27.55, stdev=210.32 00:21:20.895 clat (msec): min=26, max=139, avg=67.10, stdev=17.54 00:21:20.895 lat (msec): min=26, max=139, avg=67.13, stdev=17.54 00:21:20.895 clat percentiles (msec): 00:21:20.895 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:21:20.895 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 00:21:20.895 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 96], 00:21:20.895 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 140], 00:21:20.895 | 99.99th=[ 140] 00:21:20.895 bw ( KiB/s): min= 712, max= 1024, per=4.31%, avg=949.60, stdev=75.92, samples=20 00:21:20.895 iops : min= 178, max= 256, avg=237.35, stdev=18.96, samples=20 00:21:20.895 lat (msec) : 50=23.55%, 100=72.63%, 250=3.81% 00:21:20.895 cpu : usr=37.55%, sys=1.61%, ctx=1287, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84767: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=242, BW=971KiB/s (995kB/s)(9716KiB/10003msec) 00:21:20.895 slat (usec): min=6, max=16034, avg=35.93, stdev=437.75 00:21:20.895 clat (usec): min=1011, max=174349, avg=65733.52, stdev=21961.70 00:21:20.895 lat (usec): min=1018, max=174370, avg=65769.45, stdev=21985.11 00:21:20.895 clat percentiles (usec): 00:21:20.895 | 1.00th=[ 1713], 5.00th=[ 35390], 10.00th=[ 46400], 20.00th=[ 47973], 00:21:20.895 | 30.00th=[ 52691], 40.00th=[ 59507], 50.00th=[ 70779], 60.00th=[ 71828], 00:21:20.895 | 70.00th=[ 76022], 80.00th=[ 82314], 90.00th=[ 86508], 95.00th=[ 96994], 00:21:20.895 | 99.00th=[122160], 99.50th=[141558], 99.90th=[143655], 99.95th=[158335], 00:21:20.895 | 99.99th=[175113] 00:21:20.895 bw ( KiB/s): min= 664, max= 1080, per=4.24%, avg=934.32, stdev=99.26, samples=19 00:21:20.895 iops : min= 166, max= 270, avg=233.58, stdev=24.82, samples=19 00:21:20.895 lat (msec) : 2=1.15%, 4=2.10%, 50=22.44%, 100=69.74%, 250=4.57% 00:21:20.895 cpu : usr=34.64%, sys=1.40%, ctx=968, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84768: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=229, BW=917KiB/s (939kB/s)(9212KiB/10047msec) 00:21:20.895 slat (usec): min=6, max=8037, avg=26.99, stdev=204.79 00:21:20.895 clat (msec): min=8, max=147, avg=69.60, stdev=18.94 00:21:20.895 lat (msec): min=8, max=147, avg=69.62, stdev=18.94 00:21:20.895 clat percentiles (msec): 00:21:20.895 | 1.00th=[ 20], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 52], 00:21:20.895 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 
00:21:20.895 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 103], 00:21:20.895 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:21:20.895 | 99.99th=[ 148] 00:21:20.895 bw ( KiB/s): min= 744, max= 1288, per=4.15%, avg=914.90, stdev=116.47, samples=20 00:21:20.895 iops : min= 186, max= 322, avg=228.60, stdev=29.11, samples=20 00:21:20.895 lat (msec) : 10=0.61%, 20=0.78%, 50=15.41%, 100=77.55%, 250=5.64% 00:21:20.895 cpu : usr=37.31%, sys=1.56%, ctx=1141, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.895 filename0: (groupid=0, jobs=1): err= 0: pid=84769: Wed Dec 11 14:02:12 2024 00:21:20.895 read: IOPS=237, BW=950KiB/s (973kB/s)(9544KiB/10042msec) 00:21:20.895 slat (usec): min=4, max=4070, avg=20.66, stdev=117.34 00:21:20.895 clat (msec): min=16, max=133, avg=67.15, stdev=18.08 00:21:20.895 lat (msec): min=16, max=133, avg=67.17, stdev=18.08 00:21:20.895 clat percentiles (msec): 00:21:20.895 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 50], 00:21:20.895 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:21:20.895 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 99], 00:21:20.895 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 134], 00:21:20.895 | 99.99th=[ 134] 00:21:20.895 bw ( KiB/s): min= 696, max= 1290, per=4.31%, avg=949.70, stdev=117.94, samples=20 00:21:20.895 iops : min= 174, max= 322, avg=237.40, stdev=29.41, samples=20 00:21:20.895 lat (msec) : 20=0.38%, 50=20.79%, 100=74.10%, 250=4.74% 00:21:20.895 cpu : usr=43.64%, sys=1.74%, ctx=1278, majf=0, minf=9 00:21:20.895 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.895 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename0: (groupid=0, jobs=1): err= 0: pid=84770: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=235, BW=943KiB/s (965kB/s)(9488KiB/10064msec) 00:21:20.896 slat (usec): min=7, max=8044, avg=36.88, stdev=385.64 00:21:20.896 clat (msec): min=22, max=131, avg=67.66, stdev=18.27 00:21:20.896 lat (msec): min=22, max=131, avg=67.70, stdev=18.28 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 50], 00:21:20.896 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.896 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 97], 00:21:20.896 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.896 | 99.99th=[ 132] 00:21:20.896 bw ( KiB/s): min= 696, max= 1240, per=4.28%, avg=942.10, stdev=106.61, samples=20 00:21:20.896 iops : min= 174, max= 310, avg=235.50, stdev=26.67, samples=20 00:21:20.896 lat (msec) : 50=22.13%, 100=73.48%, 250=4.38% 00:21:20.896 cpu : usr=36.94%, sys=1.40%, ctx=1265, majf=0, minf=9 00:21:20.896 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 
complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename0: (groupid=0, jobs=1): err= 0: pid=84771: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=236, BW=944KiB/s (967kB/s)(9508KiB/10070msec) 00:21:20.896 slat (usec): min=3, max=8024, avg=20.03, stdev=183.86 00:21:20.896 clat (msec): min=2, max=131, avg=67.53, stdev=21.89 00:21:20.896 lat (msec): min=2, max=131, avg=67.55, stdev=21.89 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 4], 5.00th=[ 25], 10.00th=[ 48], 20.00th=[ 50], 00:21:20.896 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.896 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 101], 00:21:20.896 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.896 | 99.99th=[ 132] 00:21:20.896 bw ( KiB/s): min= 760, max= 1968, per=4.30%, avg=946.40, stdev=250.05, samples=20 00:21:20.896 iops : min= 190, max= 492, avg=236.60, stdev=62.51, samples=20 00:21:20.896 lat (msec) : 4=2.02%, 10=1.35%, 20=1.01%, 50=16.32%, 100=74.17% 00:21:20.896 lat (msec) : 250=5.13% 00:21:20.896 cpu : usr=35.94%, sys=1.33%, ctx=993, majf=0, minf=0 00:21:20.896 IO depths : 1=0.1%, 2=0.5%, 4=1.5%, 8=81.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84772: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=228, BW=915KiB/s (937kB/s)(9220KiB/10077msec) 00:21:20.896 slat (usec): min=6, max=8037, avg=30.78, stdev=344.53 00:21:20.896 clat (msec): min=3, max=139, avg=69.60, stdev=23.74 00:21:20.896 lat (msec): min=3, max=139, avg=69.64, stdev=23.74 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 46], 20.00th=[ 54], 00:21:20.896 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:21:20.896 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:21:20.896 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 134], 99.95th=[ 134], 00:21:20.896 | 99.99th=[ 140] 00:21:20.896 bw ( KiB/s): min= 688, max= 2048, per=4.15%, avg=915.60, stdev=278.43, samples=20 00:21:20.896 iops : min= 172, max= 512, avg=228.90, stdev=69.61, samples=20 00:21:20.896 lat (msec) : 4=2.78%, 10=1.30%, 20=2.17%, 50=10.80%, 100=76.36% 00:21:20.896 lat (msec) : 250=6.59% 00:21:20.896 cpu : usr=35.89%, sys=1.33%, ctx=1287, majf=0, minf=0 00:21:20.896 IO depths : 1=0.2%, 2=1.7%, 4=6.4%, 8=76.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84773: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=233, BW=936KiB/s (958kB/s)(9388KiB/10032msec) 00:21:20.896 slat (usec): min=7, max=15141, avg=30.70, stdev=390.00 00:21:20.896 clat (msec): min=23, max=132, avg=68.19, stdev=17.57 00:21:20.896 lat (msec): min=23, max=132, avg=68.22, stdev=17.58 00:21:20.896 clat 
percentiles (msec): 00:21:20.896 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:20.896 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:21:20.896 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 97], 00:21:20.896 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.896 | 99.99th=[ 132] 00:21:20.896 bw ( KiB/s): min= 712, max= 1040, per=4.24%, avg=934.40, stdev=74.89, samples=20 00:21:20.896 iops : min= 178, max= 260, avg=233.60, stdev=18.72, samples=20 00:21:20.896 lat (msec) : 50=22.33%, 100=73.75%, 250=3.92% 00:21:20.896 cpu : usr=35.03%, sys=1.43%, ctx=1014, majf=0, minf=9 00:21:20.896 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84774: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=228, BW=915KiB/s (937kB/s)(9192KiB/10046msec) 00:21:20.896 slat (usec): min=5, max=8030, avg=28.49, stdev=264.51 00:21:20.896 clat (msec): min=23, max=143, avg=69.74, stdev=18.09 00:21:20.896 lat (msec): min=23, max=143, avg=69.77, stdev=18.09 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:21:20.896 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.896 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 104], 00:21:20.896 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:21:20.896 | 99.99th=[ 144] 00:21:20.896 bw ( KiB/s): min= 744, max= 1080, per=4.14%, avg=912.80, stdev=98.59, samples=20 00:21:20.896 iops : min= 186, max= 270, avg=228.20, stdev=24.65, samples=20 00:21:20.896 lat (msec) : 50=19.10%, 100=74.28%, 250=6.61% 00:21:20.896 cpu : usr=36.73%, sys=1.19%, ctx=1086, majf=0, minf=9 00:21:20.896 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=78.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84775: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=236, BW=946KiB/s (969kB/s)(9500KiB/10038msec) 00:21:20.896 slat (usec): min=5, max=4028, avg=18.47, stdev=82.89 00:21:20.896 clat (msec): min=22, max=128, avg=67.44, stdev=18.22 00:21:20.896 lat (msec): min=22, max=128, avg=67.46, stdev=18.22 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:21:20.896 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:21:20.896 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 99], 00:21:20.896 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:21:20.896 | 99.99th=[ 129] 00:21:20.896 bw ( KiB/s): min= 712, max= 1232, per=4.29%, avg=945.70, stdev=97.53, samples=20 00:21:20.896 iops : min= 178, max= 308, avg=236.40, stdev=24.39, samples=20 00:21:20.896 lat (msec) : 50=20.80%, 100=75.12%, 250=4.08% 00:21:20.896 cpu : usr=41.34%, sys=1.91%, ctx=1141, majf=0, minf=9 00:21:20.896 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 
8=83.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84776: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=226, BW=907KiB/s (929kB/s)(9120KiB/10054msec) 00:21:20.896 slat (usec): min=7, max=2024, avg=18.57, stdev=43.12 00:21:20.896 clat (msec): min=23, max=143, avg=70.38, stdev=17.46 00:21:20.896 lat (msec): min=23, max=143, avg=70.40, stdev=17.46 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:21:20.896 | 30.00th=[ 60], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:21:20.896 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 92], 95.00th=[ 97], 00:21:20.896 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:21:20.896 | 99.99th=[ 144] 00:21:20.896 bw ( KiB/s): min= 672, max= 1080, per=4.11%, avg=905.10, stdev=97.14, samples=20 00:21:20.896 iops : min= 168, max= 270, avg=226.25, stdev=24.29, samples=20 00:21:20.896 lat (msec) : 50=18.42%, 100=77.59%, 250=3.99% 00:21:20.896 cpu : usr=35.50%, sys=1.56%, ctx=1152, majf=0, minf=9 00:21:20.896 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:20.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.896 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.896 filename1: (groupid=0, jobs=1): err= 0: pid=84777: Wed Dec 11 14:02:12 2024 00:21:20.896 read: IOPS=233, BW=932KiB/s (955kB/s)(9336KiB/10013msec) 00:21:20.896 slat (usec): min=5, max=8025, avg=24.35, stdev=195.71 00:21:20.896 clat (msec): min=27, max=141, avg=68.54, stdev=17.83 00:21:20.896 lat (msec): min=27, max=141, avg=68.57, stdev=17.82 00:21:20.896 clat percentiles (msec): 00:21:20.896 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:21:20.896 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.896 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 96], 00:21:20.896 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:21:20.896 | 99.99th=[ 142] 00:21:20.896 bw ( KiB/s): min= 760, max= 1021, per=4.20%, avg=925.79, stdev=63.05, samples=19 00:21:20.896 iops : min= 190, max= 255, avg=231.42, stdev=15.73, samples=19 00:21:20.896 lat (msec) : 50=19.92%, 100=75.62%, 250=4.46% 00:21:20.896 cpu : usr=36.30%, sys=1.60%, ctx=1721, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename1: (groupid=0, jobs=1): err= 0: pid=84778: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=230, BW=924KiB/s (946kB/s)(9280KiB/10046msec) 00:21:20.897 slat (usec): min=5, max=8022, avg=22.08, stdev=235.19 00:21:20.897 clat (msec): min=8, max=144, avg=69.11, stdev=19.66 00:21:20.897 lat (msec): min=8, max=144, avg=69.13, 
stdev=19.66 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 20], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 51], 00:21:20.897 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:21:20.897 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 100], 00:21:20.897 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:21:20.897 | 99.99th=[ 144] 00:21:20.897 bw ( KiB/s): min= 744, max= 1424, per=4.18%, avg=921.70, stdev=144.76, samples=20 00:21:20.897 iops : min= 186, max= 356, avg=230.30, stdev=36.19, samples=20 00:21:20.897 lat (msec) : 10=0.60%, 20=1.51%, 50=17.16%, 100=75.99%, 250=4.74% 00:21:20.897 cpu : usr=34.92%, sys=1.72%, ctx=963, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename1: (groupid=0, jobs=1): err= 0: pid=84779: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=229, BW=919KiB/s (941kB/s)(9232KiB/10046msec) 00:21:20.897 slat (usec): min=5, max=8022, avg=21.44, stdev=179.42 00:21:20.897 clat (msec): min=16, max=143, avg=69.46, stdev=18.36 00:21:20.897 lat (msec): min=16, max=143, avg=69.48, stdev=18.35 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 52], 00:21:20.897 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:21:20.897 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 99], 00:21:20.897 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.897 | 99.99th=[ 144] 00:21:20.897 bw ( KiB/s): min= 688, max= 1248, per=4.17%, avg=918.40, stdev=109.34, samples=20 00:21:20.897 iops : min= 172, max= 312, avg=229.60, stdev=27.33, samples=20 00:21:20.897 lat (msec) : 20=0.17%, 50=16.77%, 100=78.64%, 250=4.42% 00:21:20.897 cpu : usr=36.36%, sys=1.16%, ctx=1149, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84780: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=225, BW=903KiB/s (925kB/s)(9072KiB/10046msec) 00:21:20.897 slat (usec): min=7, max=8022, avg=24.18, stdev=252.44 00:21:20.897 clat (msec): min=8, max=131, avg=70.68, stdev=19.27 00:21:20.897 lat (msec): min=8, max=131, avg=70.71, stdev=19.27 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:21:20.897 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:21:20.897 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 92], 95.00th=[ 100], 00:21:20.897 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.897 | 99.99th=[ 132] 00:21:20.897 bw ( KiB/s): min= 688, max= 1296, per=4.09%, avg=900.85, stdev=121.57, samples=20 00:21:20.897 iops : min= 172, max= 324, avg=225.10, stdev=30.40, samples=20 00:21:20.897 lat (msec) : 10=0.62%, 20=1.41%, 50=14.02%, 100=79.45%, 250=4.50% 00:21:20.897 cpu : usr=36.01%, 
sys=1.63%, ctx=1028, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84781: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=219, BW=876KiB/s (898kB/s)(8784KiB/10022msec) 00:21:20.897 slat (usec): min=7, max=4035, avg=24.14, stdev=148.51 00:21:20.897 clat (msec): min=22, max=139, avg=72.90, stdev=18.86 00:21:20.897 lat (msec): min=22, max=139, avg=72.92, stdev=18.86 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 55], 00:21:20.897 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 79], 00:21:20.897 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 107], 00:21:20.897 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:21:20.897 | 99.99th=[ 140] 00:21:20.897 bw ( KiB/s): min= 768, max= 1024, per=3.96%, avg=872.00, stdev=103.30, samples=20 00:21:20.897 iops : min= 192, max= 256, avg=218.00, stdev=25.83, samples=20 00:21:20.897 lat (msec) : 50=13.25%, 100=77.73%, 250=9.02% 00:21:20.897 cpu : usr=42.54%, sys=1.77%, ctx=1318, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=2.0%, 4=8.2%, 8=75.1%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84782: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=222, BW=888KiB/s (910kB/s)(8924KiB/10047msec) 00:21:20.897 slat (usec): min=4, max=4022, avg=16.71, stdev=85.16 00:21:20.897 clat (msec): min=9, max=131, avg=71.84, stdev=18.76 00:21:20.897 lat (msec): min=9, max=131, avg=71.86, stdev=18.76 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:21:20.897 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:21:20.897 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 104], 00:21:20.897 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:21:20.897 | 99.99th=[ 132] 00:21:20.897 bw ( KiB/s): min= 768, max= 1264, per=4.02%, avg=886.10, stdev=107.62, samples=20 00:21:20.897 iops : min= 192, max= 316, avg=221.40, stdev=26.96, samples=20 00:21:20.897 lat (msec) : 10=0.63%, 20=0.09%, 50=12.82%, 100=79.65%, 250=6.81% 00:21:20.897 cpu : usr=35.90%, sys=1.56%, ctx=1104, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84783: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=221, BW=886KiB/s (907kB/s)(8888KiB/10036msec) 00:21:20.897 slat (usec): min=4, max=8022, avg=22.19, stdev=240.28 00:21:20.897 
clat (msec): min=22, max=155, avg=72.06, stdev=19.27 00:21:20.897 lat (msec): min=22, max=155, avg=72.09, stdev=19.27 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:21:20.897 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:21:20.897 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:21:20.897 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:21:20.897 | 99.99th=[ 157] 00:21:20.897 bw ( KiB/s): min= 640, max= 1149, per=4.01%, avg=884.35, stdev=114.56, samples=20 00:21:20.897 iops : min= 160, max= 287, avg=221.05, stdev=28.62, samples=20 00:21:20.897 lat (msec) : 50=16.47%, 100=77.95%, 250=5.58% 00:21:20.897 cpu : usr=34.69%, sys=1.60%, ctx=1099, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84784: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=227, BW=910KiB/s (932kB/s)(9116KiB/10020msec) 00:21:20.897 slat (usec): min=5, max=9029, avg=38.91, stdev=394.45 00:21:20.897 clat (msec): min=23, max=141, avg=70.17, stdev=19.42 00:21:20.897 lat (msec): min=23, max=141, avg=70.21, stdev=19.41 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:20.897 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 73], 00:21:20.897 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 109], 00:21:20.897 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:21:20.897 | 99.99th=[ 142] 00:21:20.897 bw ( KiB/s): min= 640, max= 1000, per=4.11%, avg=905.25, stdev=112.90, samples=20 00:21:20.897 iops : min= 160, max= 250, avg=226.30, stdev=28.24, samples=20 00:21:20.897 lat (msec) : 50=21.72%, 100=71.43%, 250=6.85% 00:21:20.897 cpu : usr=35.61%, sys=1.54%, ctx=1024, majf=0, minf=9 00:21:20.897 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:20.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 complete : 0=0.0%, 4=87.6%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.897 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.897 filename2: (groupid=0, jobs=1): err= 0: pid=84785: Wed Dec 11 14:02:12 2024 00:21:20.897 read: IOPS=234, BW=936KiB/s (959kB/s)(9408KiB/10050msec) 00:21:20.897 slat (usec): min=5, max=8048, avg=30.04, stdev=298.13 00:21:20.897 clat (msec): min=22, max=145, avg=68.18, stdev=17.90 00:21:20.897 lat (msec): min=22, max=145, avg=68.21, stdev=17.91 00:21:20.897 clat percentiles (msec): 00:21:20.897 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:21:20.897 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:21:20.897 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 99], 00:21:20.897 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 146], 99.95th=[ 146], 00:21:20.897 | 99.99th=[ 146] 00:21:20.897 bw ( KiB/s): min= 744, max= 1104, per=4.24%, avg=934.30, stdev=77.52, samples=20 00:21:20.897 iops : min= 186, max= 276, avg=233.55, stdev=19.39, samples=20 00:21:20.897 lat (msec) : 50=21.81%, 
100=73.55%, 250=4.63% 00:21:20.898 cpu : usr=35.87%, sys=1.79%, ctx=1083, majf=0, minf=9 00:21:20.898 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:20.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.898 filename2: (groupid=0, jobs=1): err= 0: pid=84786: Wed Dec 11 14:02:12 2024 00:21:20.898 read: IOPS=232, BW=928KiB/s (951kB/s)(9308KiB/10027msec) 00:21:20.898 slat (usec): min=3, max=8041, avg=35.37, stdev=332.29 00:21:20.898 clat (msec): min=23, max=141, avg=68.82, stdev=17.40 00:21:20.898 lat (msec): min=23, max=141, avg=68.85, stdev=17.41 00:21:20.898 clat percentiles (msec): 00:21:20.898 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:21:20.898 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:21:20.898 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 99], 00:21:20.898 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 142], 99.95th=[ 142], 00:21:20.898 | 99.99th=[ 142] 00:21:20.898 bw ( KiB/s): min= 720, max= 992, per=4.20%, avg=924.40, stdev=59.78, samples=20 00:21:20.898 iops : min= 180, max= 248, avg=231.10, stdev=14.95, samples=20 00:21:20.898 lat (msec) : 50=17.49%, 100=78.08%, 250=4.43% 00:21:20.898 cpu : usr=36.93%, sys=1.55%, ctx=1103, majf=0, minf=9 00:21:20.898 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.898 filename2: (groupid=0, jobs=1): err= 0: pid=84787: Wed Dec 11 14:02:12 2024 00:21:20.898 read: IOPS=207, BW=830KiB/s (850kB/s)(8312KiB/10009msec) 00:21:20.898 slat (usec): min=5, max=4042, avg=23.03, stdev=152.72 00:21:20.898 clat (msec): min=23, max=143, avg=76.90, stdev=18.59 00:21:20.898 lat (msec): min=23, max=143, avg=76.93, stdev=18.59 00:21:20.898 clat percentiles (msec): 00:21:20.898 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:21:20.898 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 81], 00:21:20.898 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 108], 00:21:20.898 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:20.898 | 99.99th=[ 144] 00:21:20.898 bw ( KiB/s): min= 640, max= 992, per=3.72%, avg=820.84, stdev=101.36, samples=19 00:21:20.898 iops : min= 160, max= 248, avg=205.21, stdev=25.34, samples=19 00:21:20.898 lat (msec) : 50=9.00%, 100=81.47%, 250=9.53% 00:21:20.898 cpu : usr=42.00%, sys=1.94%, ctx=1067, majf=0, minf=9 00:21:20.898 IO depths : 1=0.1%, 2=3.1%, 4=12.4%, 8=70.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:21:20.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 complete : 0=0.0%, 4=90.7%, 8=6.6%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.898 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.898 00:21:20.898 Run status group 0 (all jobs): 00:21:20.898 READ: bw=21.5MiB/s (22.6MB/s), 830KiB/s-971KiB/s (850kB/s-995kB/s), io=217MiB (227MB), run=10003-10077msec 00:21:20.898 14:02:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 bdev_null0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 [2024-12-11 14:02:12.520294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 bdev_null1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:20.898 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.899 { 00:21:20.899 "params": { 00:21:20.899 "name": "Nvme$subsystem", 00:21:20.899 "trtype": "$TEST_TRANSPORT", 00:21:20.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.899 "adrfam": "ipv4", 00:21:20.899 "trsvcid": "$NVMF_PORT", 00:21:20.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.899 "hdgst": ${hdgst:-false}, 00:21:20.899 "ddgst": ${ddgst:-false} 00:21:20.899 }, 00:21:20.899 "method": "bdev_nvme_attach_controller" 00:21:20.899 } 00:21:20.899 EOF 00:21:20.899 )") 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 
-- # cat 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.899 { 00:21:20.899 "params": { 00:21:20.899 "name": "Nvme$subsystem", 00:21:20.899 "trtype": "$TEST_TRANSPORT", 00:21:20.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.899 "adrfam": "ipv4", 00:21:20.899 "trsvcid": "$NVMF_PORT", 00:21:20.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.899 "hdgst": ${hdgst:-false}, 00:21:20.899 "ddgst": ${ddgst:-false} 00:21:20.899 }, 00:21:20.899 "method": "bdev_nvme_attach_controller" 00:21:20.899 } 00:21:20.899 EOF 00:21:20.899 )") 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:20.899 "params": { 00:21:20.899 "name": "Nvme0", 00:21:20.899 "trtype": "tcp", 00:21:20.899 "traddr": "10.0.0.3", 00:21:20.899 "adrfam": "ipv4", 00:21:20.899 "trsvcid": "4420", 00:21:20.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:20.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:20.899 "hdgst": false, 00:21:20.899 "ddgst": false 00:21:20.899 }, 00:21:20.899 "method": "bdev_nvme_attach_controller" 00:21:20.899 },{ 00:21:20.899 "params": { 00:21:20.899 "name": "Nvme1", 00:21:20.899 "trtype": "tcp", 00:21:20.899 "traddr": "10.0.0.3", 00:21:20.899 "adrfam": "ipv4", 00:21:20.899 "trsvcid": "4420", 00:21:20.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.899 "hdgst": false, 00:21:20.899 "ddgst": false 00:21:20.899 }, 00:21:20.899 "method": "bdev_nvme_attach_controller" 00:21:20.899 }' 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:20.899 14:02:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.899 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:20.899 ... 00:21:20.899 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:20.899 ... 
00:21:20.899 fio-3.35 00:21:20.899 Starting 4 threads 00:21:26.247 00:21:26.247 filename0: (groupid=0, jobs=1): err= 0: pid=84932: Wed Dec 11 14:02:18 2024 00:21:26.247 read: IOPS=2006, BW=15.7MiB/s (16.4MB/s)(78.5MiB/5004msec) 00:21:26.247 slat (nsec): min=6742, max=54647, avg=10007.30, stdev=3816.89 00:21:26.247 clat (usec): min=750, max=7288, avg=3957.09, stdev=773.12 00:21:26.247 lat (usec): min=758, max=7302, avg=3967.10, stdev=772.29 00:21:26.247 clat percentiles (usec): 00:21:26.247 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3458], 00:21:26.247 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:21:26.247 | 70.00th=[ 3818], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5407], 00:21:26.248 | 99.00th=[ 5473], 99.50th=[ 5473], 99.90th=[ 5604], 99.95th=[ 5604], 00:21:26.248 | 99.99th=[ 7046] 00:21:26.248 bw ( KiB/s): min=15936, max=16320, per=25.09%, avg=16074.67, stdev=119.20, samples=9 00:21:26.248 iops : min= 1992, max= 2040, avg=2009.33, stdev=14.90, samples=9 00:21:26.248 lat (usec) : 1000=0.06% 00:21:26.248 lat (msec) : 2=0.36%, 4=73.00%, 10=26.58% 00:21:26.248 cpu : usr=91.35%, sys=7.72%, ctx=11, majf=0, minf=0 00:21:26.248 IO depths : 1=0.1%, 2=0.6%, 4=71.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 issued rwts: total=10042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:26.248 filename0: (groupid=0, jobs=1): err= 0: pid=84933: Wed Dec 11 14:02:18 2024 00:21:26.248 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5002msec) 00:21:26.248 slat (nsec): min=7624, max=64824, avg=15324.68, stdev=4661.62 00:21:26.248 clat (usec): min=1398, max=7114, avg=3956.65, stdev=753.72 00:21:26.248 lat (usec): min=1407, max=7127, avg=3971.98, stdev=753.55 00:21:26.248 clat percentiles (usec): 00:21:26.248 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3425], 20.00th=[ 3458], 00:21:26.248 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:21:26.248 | 70.00th=[ 3818], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5342], 00:21:26.248 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5538], 99.95th=[ 5538], 00:21:26.248 | 99.99th=[ 5669] 00:21:26.248 bw ( KiB/s): min=15696, max=16160, per=24.98%, avg=16003.56, stdev=139.66, samples=9 00:21:26.248 iops : min= 1962, max= 2020, avg=2000.44, stdev=17.46, samples=9 00:21:26.248 lat (msec) : 2=0.08%, 4=73.10%, 10=26.82% 00:21:26.248 cpu : usr=90.98%, sys=8.06%, ctx=7, majf=0, minf=10 00:21:26.248 IO depths : 1=0.1%, 2=0.7%, 4=71.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 issued rwts: total=10011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:26.248 filename1: (groupid=0, jobs=1): err= 0: pid=84934: Wed Dec 11 14:02:18 2024 00:21:26.248 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5003msec) 00:21:26.248 slat (nsec): min=7030, max=58740, avg=11538.55, stdev=4866.57 00:21:26.248 clat (usec): min=1798, max=7067, avg=3964.79, stdev=754.33 00:21:26.248 lat (usec): min=1807, max=7081, avg=3976.33, stdev=754.26 00:21:26.248 clat percentiles (usec): 00:21:26.248 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3458], 
00:21:26.248 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:21:26.248 | 70.00th=[ 3818], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5407], 00:21:26.248 | 99.00th=[ 5473], 99.50th=[ 5473], 99.90th=[ 5538], 99.95th=[ 5538], 00:21:26.248 | 99.99th=[ 5669] 00:21:26.248 bw ( KiB/s): min=15792, max=16160, per=25.00%, avg=16016.00, stdev=116.48, samples=9 00:21:26.248 iops : min= 1974, max= 2020, avg=2002.00, stdev=14.56, samples=9 00:21:26.248 lat (msec) : 2=0.02%, 4=73.13%, 10=26.85% 00:21:26.248 cpu : usr=91.56%, sys=7.50%, ctx=6, majf=0, minf=9 00:21:26.248 IO depths : 1=0.1%, 2=0.7%, 4=71.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 issued rwts: total=10012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:26.248 filename1: (groupid=0, jobs=1): err= 0: pid=84935: Wed Dec 11 14:02:18 2024 00:21:26.248 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5001msec) 00:21:26.248 slat (usec): min=3, max=200, avg=14.55, stdev= 5.43 00:21:26.248 clat (usec): min=1381, max=7105, avg=3957.68, stdev=756.43 00:21:26.248 lat (usec): min=1390, max=7119, avg=3972.23, stdev=755.08 00:21:26.248 clat percentiles (usec): 00:21:26.248 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3425], 20.00th=[ 3458], 00:21:26.248 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:21:26.248 | 70.00th=[ 3818], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5407], 00:21:26.248 | 99.00th=[ 5473], 99.50th=[ 5473], 99.90th=[ 5538], 99.95th=[ 5604], 00:21:26.248 | 99.99th=[ 5669] 00:21:26.248 bw ( KiB/s): min=15728, max=16160, per=24.98%, avg=16003.56, stdev=131.89, samples=9 00:21:26.248 iops : min= 1966, max= 2020, avg=2000.44, stdev=16.49, samples=9 00:21:26.248 lat (msec) : 2=0.11%, 4=72.98%, 10=26.91% 00:21:26.248 cpu : usr=89.98%, sys=8.68%, ctx=198, majf=0, minf=9 00:21:26.248 IO depths : 1=0.1%, 2=0.7%, 4=71.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.248 issued rwts: total=10009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:26.248 00:21:26.248 Run status group 0 (all jobs): 00:21:26.248 READ: bw=62.6MiB/s (65.6MB/s), 15.6MiB/s-15.7MiB/s (16.4MB/s-16.4MB/s), io=313MiB (328MB), run=5001-5004msec 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 ************************************ 00:21:26.248 END TEST fio_dif_rand_params 00:21:26.248 ************************************ 00:21:26.248 00:21:26.248 real 0m23.836s 00:21:26.248 user 2m4.233s 00:21:26.248 sys 0m7.638s 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:26.248 14:02:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.248 14:02:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 ************************************ 00:21:26.248 START TEST fio_dif_digest 00:21:26.248 ************************************ 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 bdev_null0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.248 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:26.248 [2024-12-11 14:02:18.774999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:21:26.249 { 00:21:26.249 "params": { 00:21:26.249 "name": "Nvme$subsystem", 00:21:26.249 "trtype": "$TEST_TRANSPORT", 00:21:26.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.249 "adrfam": "ipv4", 00:21:26.249 "trsvcid": "$NVMF_PORT", 00:21:26.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.249 "hdgst": ${hdgst:-false}, 00:21:26.249 "ddgst": ${ddgst:-false} 00:21:26.249 }, 00:21:26.249 "method": "bdev_nvme_attach_controller" 00:21:26.249 } 00:21:26.249 EOF 00:21:26.249 )") 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:26.249 "params": { 00:21:26.249 "name": "Nvme0", 00:21:26.249 "trtype": "tcp", 00:21:26.249 "traddr": "10.0.0.3", 00:21:26.249 "adrfam": "ipv4", 00:21:26.249 "trsvcid": "4420", 00:21:26.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.249 "hdgst": true, 00:21:26.249 "ddgst": true 00:21:26.249 }, 00:21:26.249 "method": "bdev_nvme_attach_controller" 00:21:26.249 }' 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:26.249 14:02:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.249 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:26.249 ... 
00:21:26.249 fio-3.35 00:21:26.249 Starting 3 threads 00:21:38.455 00:21:38.455 filename0: (groupid=0, jobs=1): err= 0: pid=85041: Wed Dec 11 14:02:29 2024 00:21:38.455 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10004msec) 00:21:38.455 slat (nsec): min=7305, max=63172, avg=16074.20, stdev=5469.18 00:21:38.455 clat (usec): min=11134, max=23798, avg=13501.15, stdev=983.63 00:21:38.455 lat (usec): min=11142, max=23813, avg=13517.23, stdev=983.84 00:21:38.455 clat percentiles (usec): 00:21:38.455 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12649], 20.00th=[12780], 00:21:38.455 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:21:38.455 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:21:38.455 | 99.00th=[17171], 99.50th=[17695], 99.90th=[23725], 99.95th=[23725], 00:21:38.455 | 99.99th=[23725] 00:21:38.455 bw ( KiB/s): min=26112, max=29952, per=33.36%, avg=28375.58, stdev=1186.30, samples=19 00:21:38.455 iops : min= 204, max= 234, avg=221.68, stdev= 9.27, samples=19 00:21:38.455 lat (msec) : 20=99.86%, 50=0.14% 00:21:38.455 cpu : usr=91.27%, sys=8.19%, ctx=9, majf=0, minf=0 00:21:38.455 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:38.455 filename0: (groupid=0, jobs=1): err= 0: pid=85042: Wed Dec 11 14:02:29 2024 00:21:38.455 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10004msec) 00:21:38.455 slat (nsec): min=7322, max=58204, avg=16258.80, stdev=5271.39 00:21:38.455 clat (usec): min=12344, max=23812, avg=13500.30, stdev=965.82 00:21:38.455 lat (usec): min=12352, max=23827, avg=13516.56, stdev=966.03 00:21:38.455 clat percentiles (usec): 00:21:38.455 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12649], 20.00th=[12780], 00:21:38.455 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:21:38.455 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:21:38.455 | 99.00th=[17171], 99.50th=[17433], 99.90th=[23725], 99.95th=[23725], 00:21:38.455 | 99.99th=[23725] 00:21:38.455 bw ( KiB/s): min=26112, max=29952, per=33.36%, avg=28375.58, stdev=1186.30, samples=19 00:21:38.455 iops : min= 204, max= 234, avg=221.68, stdev= 9.27, samples=19 00:21:38.455 lat (msec) : 20=99.86%, 50=0.14% 00:21:38.455 cpu : usr=91.29%, sys=8.18%, ctx=76, majf=0, minf=0 00:21:38.455 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:38.455 filename0: (groupid=0, jobs=1): err= 0: pid=85043: Wed Dec 11 14:02:29 2024 00:21:38.455 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(278MiB/10012msec) 00:21:38.455 slat (nsec): min=7135, max=61540, avg=15438.43, stdev=5629.27 00:21:38.455 clat (usec): min=8066, max=23794, avg=13495.16, stdev=985.69 00:21:38.455 lat (usec): min=8073, max=23806, avg=13510.60, stdev=985.95 00:21:38.455 clat percentiles (usec): 00:21:38.455 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12649], 20.00th=[12780], 00:21:38.455 | 30.00th=[12911], 
40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:21:38.455 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:21:38.455 | 99.00th=[17171], 99.50th=[17433], 99.90th=[23725], 99.95th=[23725], 00:21:38.455 | 99.99th=[23725] 00:21:38.455 bw ( KiB/s): min=26112, max=29952, per=33.36%, avg=28377.60, stdev=1041.62, samples=20 00:21:38.455 iops : min= 204, max= 234, avg=221.70, stdev= 8.14, samples=20 00:21:38.455 lat (msec) : 10=0.14%, 20=99.73%, 50=0.14% 00:21:38.455 cpu : usr=90.80%, sys=8.67%, ctx=19, majf=0, minf=0 00:21:38.455 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.455 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:38.455 00:21:38.455 Run status group 0 (all jobs): 00:21:38.455 READ: bw=83.1MiB/s (87.1MB/s), 27.7MiB/s-27.7MiB/s (29.0MB/s-29.1MB/s), io=832MiB (872MB), run=10004-10012msec 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.455 ************************************ 00:21:38.455 END TEST fio_dif_digest 00:21:38.455 ************************************ 00:21:38.455 00:21:38.455 real 0m11.068s 00:21:38.455 user 0m28.030s 00:21:38.455 sys 0m2.802s 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.455 14:02:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:38.455 14:02:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:38.455 14:02:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.455 rmmod nvme_tcp 00:21:38.455 rmmod nvme_fabrics 00:21:38.455 rmmod nvme_keyring 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 84289 ']' 00:21:38.455 14:02:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 84289 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 84289 ']' 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 84289 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84289 00:21:38.455 killing process with pid 84289 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84289' 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@973 -- # kill 84289 00:21:38.455 14:02:29 nvmf_dif -- common/autotest_common.sh@978 -- # wait 84289 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:38.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.455 Waiting for block devices as requested 00:21:38.455 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.455 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:38.455 14:02:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.455 14:02:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:38.455 14:02:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.455 14:02:31 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:38.455 00:21:38.455 real 1m0.179s 00:21:38.455 user 3m48.478s 00:21:38.455 sys 0m19.646s 00:21:38.455 14:02:31 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.455 ************************************ 00:21:38.455 END TEST nvmf_dif 00:21:38.455 ************************************ 00:21:38.455 14:02:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:38.455 14:02:31 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:38.455 14:02:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.455 14:02:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.455 14:02:31 -- common/autotest_common.sh@10 -- # set +x 00:21:38.455 ************************************ 00:21:38.455 START TEST nvmf_abort_qd_sizes 00:21:38.455 ************************************ 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:38.455 * Looking for test storage... 00:21:38.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.455 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:38.455 Cannot find device "nvmf_init_br" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:38.455 Cannot find device "nvmf_init_br2" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:38.455 Cannot find device "nvmf_tgt_br" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.455 Cannot find device "nvmf_tgt_br2" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:38.455 Cannot find device "nvmf_init_br" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:38.455 Cannot find device "nvmf_init_br2" 00:21:38.455 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:38.456 Cannot find device "nvmf_tgt_br" 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:38.456 Cannot find device "nvmf_tgt_br2" 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:38.456 Cannot find device "nvmf_br" 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:38.456 Cannot find device "nvmf_init_if" 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:38.456 Cannot find device "nvmf_init_if2" 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:38.456 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:38.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:38.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:38.714 00:21:38.714 --- 10.0.0.3 ping statistics --- 00:21:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.714 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:38.714 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:38.714 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:21:38.714 00:21:38.714 --- 10.0.0.4 ping statistics --- 00:21:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.714 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:38.714 00:21:38.714 --- 10.0.0.1 ping statistics --- 00:21:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.714 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:38.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:38.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:21:38.714 00:21:38.714 --- 10.0.0.2 ping statistics --- 00:21:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.714 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:38.714 14:02:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:39.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:39.540 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.540 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=85695 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 85695 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 85695 ']' 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.540 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:39.540 [2024-12-11 14:02:32.574133] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:39.540 [2024-12-11 14:02:32.574254] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.799 [2024-12-11 14:02:32.733675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.799 [2024-12-11 14:02:32.796236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.799 [2024-12-11 14:02:32.796537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.799 [2024-12-11 14:02:32.796795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.799 [2024-12-11 14:02:32.796938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.799 [2024-12-11 14:02:32.797155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.799 [2024-12-11 14:02:32.798492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.799 [2024-12-11 14:02:32.798579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.799 [2024-12-11 14:02:32.798725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.799 [2024-12-11 14:02:32.798746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.058 [2024-12-11 14:02:32.858876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:40.058 14:02:32 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:40.058 14:02:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.058 14:02:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 ************************************ 00:21:40.058 START TEST spdk_target_abort 00:21:40.058 ************************************ 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 spdk_targetn1 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 [2024-12-11 14:02:33.086322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.058 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.316 [2024-12-11 14:02:33.122282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:40.316 14:02:33 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:40.316 14:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:43.601 Initializing NVMe Controllers 00:21:43.601 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:43.601 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:43.601 Initialization complete. Launching workers. 
00:21:43.601 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10239, failed: 0 00:21:43.601 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1070, failed to submit 9169 00:21:43.601 success 869, unsuccessful 201, failed 0 00:21:43.601 14:02:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:43.601 14:02:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:46.885 Initializing NVMe Controllers 00:21:46.885 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:46.885 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:46.885 Initialization complete. Launching workers. 00:21:46.885 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8912, failed: 0 00:21:46.885 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1197, failed to submit 7715 00:21:46.885 success 378, unsuccessful 819, failed 0 00:21:46.885 14:02:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:46.885 14:02:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.167 Initializing NVMe Controllers 00:21:50.167 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.167 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:50.167 Initialization complete. Launching workers. 
00:21:50.167 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31506, failed: 0 00:21:50.167 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2274, failed to submit 29232 00:21:50.167 success 478, unsuccessful 1796, failed 0 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.167 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 85695 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 85695 ']' 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 85695 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85695 00:21:50.732 killing process with pid 85695 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85695' 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 85695 00:21:50.732 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 85695 00:21:50.990 00:21:50.990 real 0m10.771s 00:21:50.990 user 0m40.987s 00:21:50.990 sys 0m2.065s 00:21:50.990 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.990 14:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.990 ************************************ 00:21:50.990 END TEST spdk_target_abort 00:21:50.991 ************************************ 00:21:50.991 14:02:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:50.991 14:02:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:50.991 14:02:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.991 14:02:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:50.991 ************************************ 00:21:50.991 START TEST kernel_target_abort 00:21:50.991 
************************************ 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:50.991 14:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.249 Waiting for block devices as requested 00:21:51.249 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.507 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:51.507 No valid GPT data, bailing 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:51.507 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:51.765 No valid GPT data, bailing 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:51.765 No valid GPT data, bailing 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:51.765 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:51.766 No valid GPT data, bailing 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 --hostid=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 -a 10.0.0.1 -t tcp -s 4420 00:21:51.766 00:21:51.766 Discovery Log Number of Records 2, Generation counter 2 00:21:51.766 =====Discovery Log Entry 0====== 00:21:51.766 trtype: tcp 00:21:51.766 adrfam: ipv4 00:21:51.766 subtype: current discovery subsystem 00:21:51.766 treq: not specified, sq flow control disable supported 00:21:51.766 portid: 1 00:21:51.766 trsvcid: 4420 00:21:51.766 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:51.766 traddr: 10.0.0.1 00:21:51.766 eflags: none 00:21:51.766 sectype: none 00:21:51.766 =====Discovery Log Entry 1====== 00:21:51.766 trtype: tcp 00:21:51.766 adrfam: ipv4 00:21:51.766 subtype: nvme subsystem 00:21:51.766 treq: not specified, sq flow control disable supported 00:21:51.766 portid: 1 00:21:51.766 trsvcid: 4420 00:21:51.766 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:51.766 traddr: 10.0.0.1 00:21:51.766 eflags: none 00:21:51.766 sectype: none 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:51.766 14:02:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:51.766 14:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:55.093 Initializing NVMe Controllers 00:21:55.093 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:55.093 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:55.093 Initialization complete. Launching workers. 00:21:55.093 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32488, failed: 0 00:21:55.093 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32488, failed to submit 0 00:21:55.093 success 0, unsuccessful 32488, failed 0 00:21:55.093 14:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:55.093 14:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:58.377 Initializing NVMe Controllers 00:21:58.377 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:58.377 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:58.377 Initialization complete. Launching workers. 
00:21:58.377 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67403, failed: 0 00:21:58.377 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29396, failed to submit 38007 00:21:58.377 success 0, unsuccessful 29396, failed 0 00:21:58.377 14:02:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:58.377 14:02:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:01.662 Initializing NVMe Controllers 00:22:01.662 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:01.662 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:01.662 Initialization complete. Launching workers. 00:22:01.662 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78570, failed: 0 00:22:01.662 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19598, failed to submit 58972 00:22:01.662 success 0, unsuccessful 19598, failed 0 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:01.662 14:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:02.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:04.129 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:04.129 00:22:04.129 real 0m12.968s 00:22:04.129 user 0m5.992s 00:22:04.129 sys 0m4.465s 00:22:04.129 ************************************ 00:22:04.129 END TEST kernel_target_abort 00:22:04.129 ************************************ 00:22:04.129 14:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.129 14:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:04.129 
14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.129 rmmod nvme_tcp 00:22:04.129 rmmod nvme_fabrics 00:22:04.129 rmmod nvme_keyring 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 85695 ']' 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 85695 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 85695 ']' 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 85695 00:22:04.129 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85695) - No such process 00:22:04.129 Process with pid 85695 is not found 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 85695 is not found' 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:04.129 14:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:04.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.388 Waiting for block devices as requested 00:22:04.388 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:04.646 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:04.646 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:04.646 14:02:57 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:04.905 00:22:04.905 real 0m26.753s 00:22:04.905 user 0m48.111s 00:22:04.905 sys 0m7.959s 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.905 14:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:04.905 ************************************ 00:22:04.905 END TEST nvmf_abort_qd_sizes 00:22:04.905 ************************************ 00:22:04.906 14:02:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:04.906 14:02:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:04.906 14:02:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.906 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:04.906 ************************************ 00:22:04.906 START TEST keyring_file 00:22:04.906 ************************************ 00:22:04.906 14:02:57 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:05.165 * Looking for test storage... 
00:22:05.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:05.166 14:02:57 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:05.166 14:02:57 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:22:05.166 14:02:57 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:05.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.166 --rc genhtml_branch_coverage=1 00:22:05.166 --rc genhtml_function_coverage=1 00:22:05.166 --rc genhtml_legend=1 00:22:05.166 --rc geninfo_all_blocks=1 00:22:05.166 --rc geninfo_unexecuted_blocks=1 00:22:05.166 00:22:05.166 ' 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:05.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.166 --rc genhtml_branch_coverage=1 00:22:05.166 --rc genhtml_function_coverage=1 00:22:05.166 --rc genhtml_legend=1 00:22:05.166 --rc geninfo_all_blocks=1 00:22:05.166 --rc 
geninfo_unexecuted_blocks=1 00:22:05.166 00:22:05.166 ' 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:05.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.166 --rc genhtml_branch_coverage=1 00:22:05.166 --rc genhtml_function_coverage=1 00:22:05.166 --rc genhtml_legend=1 00:22:05.166 --rc geninfo_all_blocks=1 00:22:05.166 --rc geninfo_unexecuted_blocks=1 00:22:05.166 00:22:05.166 ' 00:22:05.166 14:02:58 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:05.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.166 --rc genhtml_branch_coverage=1 00:22:05.166 --rc genhtml_function_coverage=1 00:22:05.166 --rc genhtml_legend=1 00:22:05.166 --rc geninfo_all_blocks=1 00:22:05.166 --rc geninfo_unexecuted_blocks=1 00:22:05.166 00:22:05.166 ' 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.166 14:02:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.166 14:02:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.166 14:02:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.166 14:02:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.166 14:02:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:05.166 14:02:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.166 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:05.166 14:02:58 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uRlmlse2Sx 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:05.166 14:02:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uRlmlse2Sx 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uRlmlse2Sx 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uRlmlse2Sx 00:22:05.166 14:02:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:05.166 14:02:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:05.167 14:02:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SzimadEuDU 00:22:05.167 14:02:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:05.167 14:02:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:05.426 14:02:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SzimadEuDU 00:22:05.426 14:02:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SzimadEuDU 00:22:05.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
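[Sketch, not part of the captured output: the prep_key / format_interchange_psk steps traced above amount to writing an NVMe-oF TLS PSK file in the interchange format ("NVMeTLSkey-1:<hash id>:<base64 of key bytes plus CRC32>:"), tightening its mode, and registering it over the bperf RPC socket. The python body below is an illustrative stand-in for the real helper in test/nvmf/common.sh, and the hash-id/CRC layout is an assumption based on the documented interchange format; the key value, socket path, and 0600 requirement are taken from the log (a 0660 file is rejected later with "Invalid permissions for key file").]
key_hex=00112233445566778899aabbccddeeff            # same test key used for key0 above
psk_path=$(mktemp)                                   # e.g. /tmp/tmp.XXXXXXXXXX
# Interchange layout (assumed): prefix, hash id (00 = PSK used as configured), base64(key || CRC32(key)).
python3 - "$key_hex" > "$psk_path" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(key) & 0xffffffff)
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$psk_path"                               # keyring_file refuses group/other-accessible key files
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$psk_path"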
00:22:05.426 14:02:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SzimadEuDU 00:22:05.426 14:02:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=86602 00:22:05.426 14:02:58 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:05.426 14:02:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 86602 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86602 ']' 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.426 14:02:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:05.426 [2024-12-11 14:02:58.310930] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:05.426 [2024-12-11 14:02:58.311330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86602 ] 00:22:05.426 [2024-12-11 14:02:58.462249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.685 [2024-12-11 14:02:58.519054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.685 [2024-12-11 14:02:58.594938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:05.945 14:02:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:05.945 [2024-12-11 14:02:58.805366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.945 null0 00:22:05.945 [2024-12-11 14:02:58.837339] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.945 [2024-12-11 14:02:58.837518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.945 14:02:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:05.945 [2024-12-11 14:02:58.865327] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:05.945 request: 00:22:05.945 { 00:22:05.945 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.945 "secure_channel": false, 00:22:05.945 "listen_address": { 00:22:05.945 "trtype": "tcp", 00:22:05.945 "traddr": "127.0.0.1", 00:22:05.945 "trsvcid": "4420" 00:22:05.945 }, 00:22:05.945 "method": "nvmf_subsystem_add_listener", 00:22:05.945 "req_id": 1 00:22:05.945 } 00:22:05.945 Got JSON-RPC error response 00:22:05.945 response: 00:22:05.945 { 00:22:05.945 "code": -32602, 00:22:05.945 "message": "Invalid parameters" 00:22:05.945 } 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.945 14:02:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=86612 00:22:05.945 14:02:58 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:05.945 14:02:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 86612 /var/tmp/bperf.sock 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86612 ']' 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:05.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.945 14:02:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:05.945 [2024-12-11 14:02:58.931049] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:05.945 [2024-12-11 14:02:58.931375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86612 ] 00:22:06.204 [2024-12-11 14:02:59.083600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.204 [2024-12-11 14:02:59.138689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.204 [2024-12-11 14:02:59.198290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:06.464 14:02:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.464 14:02:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:06.464 14:02:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:06.464 14:02:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:06.464 14:02:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SzimadEuDU 00:22:06.464 14:02:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SzimadEuDU 00:22:07.032 14:02:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:07.032 14:02:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:07.032 14:02:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:07.032 14:02:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.032 14:02:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.032 14:03:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uRlmlse2Sx == \/\t\m\p\/\t\m\p\.\u\R\l\m\l\s\e\2\S\x ]] 00:22:07.032 14:03:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:07.032 14:03:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:07.032 14:03:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.032 14:03:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.032 14:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:07.290 14:03:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SzimadEuDU == \/\t\m\p\/\t\m\p\.\S\z\i\m\a\d\E\u\D\U ]] 00:22:07.290 14:03:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:07.290 14:03:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.290 14:03:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:07.290 14:03:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.290 14:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:07.290 14:03:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.549 14:03:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:07.549 14:03:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:07.549 14:03:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.549 14:03:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:07.549 14:03:00 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.549 14:03:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.549 14:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:07.808 14:03:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:07.808 14:03:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:07.808 14:03:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.076 [2024-12-11 14:03:01.020469] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.076 nvme0n1 00:22:08.076 14:03:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:08.335 14:03:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:08.335 14:03:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:08.335 14:03:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.335 14:03:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.335 14:03:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:08.592 14:03:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:08.592 14:03:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:08.593 14:03:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:08.593 14:03:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:08.593 14:03:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.593 14:03:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.593 14:03:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:08.593 14:03:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:08.593 14:03:01 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.851 Running I/O for 1 seconds... 
00:22:09.788 12883.00 IOPS, 50.32 MiB/s 00:22:09.788 Latency(us) 00:22:09.788 [2024-12-11T14:03:02.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.788 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:09.788 nvme0n1 : 1.01 12933.10 50.52 0.00 0.00 9871.86 4259.84 18350.08 00:22:09.788 [2024-12-11T14:03:02.835Z] =================================================================================================================== 00:22:09.788 [2024-12-11T14:03:02.835Z] Total : 12933.10 50.52 0.00 0.00 9871.86 4259.84 18350.08 00:22:09.788 { 00:22:09.788 "results": [ 00:22:09.788 { 00:22:09.788 "job": "nvme0n1", 00:22:09.788 "core_mask": "0x2", 00:22:09.788 "workload": "randrw", 00:22:09.788 "percentage": 50, 00:22:09.788 "status": "finished", 00:22:09.788 "queue_depth": 128, 00:22:09.788 "io_size": 4096, 00:22:09.788 "runtime": 1.006101, 00:22:09.788 "iops": 12933.095186268576, 00:22:09.788 "mibps": 50.519903071361625, 00:22:09.788 "io_failed": 0, 00:22:09.788 "io_timeout": 0, 00:22:09.788 "avg_latency_us": 9871.861368806416, 00:22:09.788 "min_latency_us": 4259.84, 00:22:09.788 "max_latency_us": 18350.08 00:22:09.788 } 00:22:09.788 ], 00:22:09.788 "core_count": 1 00:22:09.788 } 00:22:09.788 14:03:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:09.789 14:03:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:10.355 14:03:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.355 14:03:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:10.355 14:03:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.355 14:03:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.921 14:03:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:10.921 14:03:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.921 14:03:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:10.921 14:03:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:10.921 [2024-12-11 14:03:03.953468] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:10.921 [2024-12-11 14:03:03.954153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bce0 (107): Transport endpoint is not connected 00:22:10.921 [2024-12-11 14:03:03.955141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bce0 (9): Bad file descriptor 00:22:10.921 [2024-12-11 14:03:03.956137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:10.921 [2024-12-11 14:03:03.956179] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:10.921 [2024-12-11 14:03:03.956190] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:10.921 [2024-12-11 14:03:03.956200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:10.921 request: 00:22:10.921 { 00:22:10.921 "name": "nvme0", 00:22:10.921 "trtype": "tcp", 00:22:10.921 "traddr": "127.0.0.1", 00:22:10.921 "adrfam": "ipv4", 00:22:10.921 "trsvcid": "4420", 00:22:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.921 "prchk_reftag": false, 00:22:10.921 "prchk_guard": false, 00:22:10.921 "hdgst": false, 00:22:10.921 "ddgst": false, 00:22:10.921 "psk": "key1", 00:22:10.921 "allow_unrecognized_csi": false, 00:22:10.921 "method": "bdev_nvme_attach_controller", 00:22:10.921 "req_id": 1 00:22:10.921 } 00:22:10.921 Got JSON-RPC error response 00:22:10.921 response: 00:22:10.921 { 00:22:10.921 "code": -5, 00:22:10.921 "message": "Input/output error" 00:22:10.921 } 00:22:11.179 14:03:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:11.179 14:03:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.179 14:03:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.179 14:03:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.179 14:03:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:11.179 14:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:11.179 14:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.179 14:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.179 14:03:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.179 14:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.437 14:03:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:11.437 14:03:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:11.437 14:03:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:11.437 14:03:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.437 14:03:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.437 14:03:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.437 14:03:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:11.696 14:03:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:11.696 14:03:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:11.696 14:03:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:11.954 14:03:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:11.954 14:03:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:11.954 14:03:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:11.954 14:03:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:11.954 14:03:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.214 14:03:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:12.214 14:03:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uRlmlse2Sx 00:22:12.214 14:03:05 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:12.214 14:03:05 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.214 14:03:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:12.214 14:03:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:12.474 [2024-12-11 14:03:05.466469] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uRlmlse2Sx': 0100660 00:22:12.474 [2024-12-11 14:03:05.466519] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:12.474 request: 00:22:12.474 { 00:22:12.474 "name": "key0", 00:22:12.474 "path": "/tmp/tmp.uRlmlse2Sx", 00:22:12.474 "method": "keyring_file_add_key", 00:22:12.474 "req_id": 1 00:22:12.474 } 00:22:12.474 Got JSON-RPC error response 00:22:12.474 response: 00:22:12.474 { 00:22:12.474 "code": -1, 00:22:12.474 "message": "Operation not permitted" 00:22:12.474 } 00:22:12.474 14:03:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:12.474 14:03:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.474 14:03:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.474 14:03:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.474 14:03:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uRlmlse2Sx 00:22:12.474 14:03:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:12.474 14:03:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRlmlse2Sx 00:22:13.040 14:03:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uRlmlse2Sx 00:22:13.040 14:03:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:13.040 14:03:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:13.040 14:03:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.040 14:03:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.040 14:03:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:13.040 14:03:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.040 14:03:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:13.040 14:03:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.040 14:03:06 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.040 14:03:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.040 14:03:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.298 [2024-12-11 14:03:06.250602] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uRlmlse2Sx': No such file or directory 00:22:13.298 [2024-12-11 14:03:06.250641] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:13.298 [2024-12-11 14:03:06.250686] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:13.298 [2024-12-11 14:03:06.250695] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:13.298 [2024-12-11 14:03:06.250704] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:13.298 [2024-12-11 14:03:06.250712] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:13.298 request: 00:22:13.298 { 00:22:13.298 "name": "nvme0", 00:22:13.298 "trtype": "tcp", 00:22:13.298 "traddr": "127.0.0.1", 00:22:13.298 "adrfam": "ipv4", 00:22:13.298 "trsvcid": "4420", 00:22:13.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.298 "prchk_reftag": false, 00:22:13.298 "prchk_guard": false, 00:22:13.298 "hdgst": false, 00:22:13.298 "ddgst": false, 00:22:13.298 "psk": "key0", 00:22:13.298 "allow_unrecognized_csi": false, 00:22:13.298 "method": "bdev_nvme_attach_controller", 00:22:13.298 "req_id": 1 00:22:13.298 } 00:22:13.298 Got JSON-RPC error response 00:22:13.298 response: 00:22:13.298 { 00:22:13.298 "code": -19, 00:22:13.298 "message": "No such device" 00:22:13.298 } 00:22:13.298 14:03:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:13.298 14:03:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.298 14:03:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.298 14:03:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.298 14:03:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:13.298 14:03:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:13.557 14:03:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:13.557 
14:03:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HJiZBdKslf 00:22:13.557 14:03:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:13.557 14:03:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:13.815 14:03:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HJiZBdKslf 00:22:13.815 14:03:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HJiZBdKslf 00:22:13.815 14:03:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.HJiZBdKslf 00:22:13.815 14:03:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HJiZBdKslf 00:22:13.815 14:03:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HJiZBdKslf 00:22:14.073 14:03:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:14.073 14:03:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:14.331 nvme0n1 00:22:14.331 14:03:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:14.331 14:03:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.331 14:03:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:14.331 14:03:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.331 14:03:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.331 14:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.589 14:03:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:14.589 14:03:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:14.589 14:03:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:14.847 14:03:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:14.847 14:03:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:14.847 14:03:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.847 14:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.847 14:03:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.105 14:03:08 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:15.105 14:03:08 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:15.105 14:03:08 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:22:15.105 14:03:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.105 14:03:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.105 14:03:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.105 14:03:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:15.362 14:03:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:15.362 14:03:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:15.362 14:03:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:15.620 14:03:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:15.620 14:03:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.620 14:03:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:15.878 14:03:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:15.878 14:03:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HJiZBdKslf 00:22:15.878 14:03:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HJiZBdKslf 00:22:16.136 14:03:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SzimadEuDU 00:22:16.136 14:03:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SzimadEuDU 00:22:16.394 14:03:09 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.394 14:03:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.651 nvme0n1 00:22:16.651 14:03:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:16.651 14:03:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:16.909 14:03:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:16.909 "subsystems": [ 00:22:16.909 { 00:22:16.909 "subsystem": "keyring", 00:22:16.909 "config": [ 00:22:16.909 { 00:22:16.909 "method": "keyring_file_add_key", 00:22:16.909 "params": { 00:22:16.909 "name": "key0", 00:22:16.909 "path": "/tmp/tmp.HJiZBdKslf" 00:22:16.909 } 00:22:16.909 }, 00:22:16.909 { 00:22:16.909 "method": "keyring_file_add_key", 00:22:16.909 "params": { 00:22:16.909 "name": "key1", 00:22:16.909 "path": "/tmp/tmp.SzimadEuDU" 00:22:16.909 } 00:22:16.909 } 00:22:16.909 ] 00:22:16.909 }, 00:22:16.909 { 00:22:16.909 "subsystem": "iobuf", 00:22:16.909 "config": [ 00:22:16.909 { 00:22:16.909 "method": "iobuf_set_options", 00:22:16.909 "params": { 00:22:16.909 "small_pool_count": 8192, 00:22:16.909 "large_pool_count": 1024, 00:22:16.909 "small_bufsize": 8192, 00:22:16.909 "large_bufsize": 135168, 00:22:16.909 "enable_numa": false 00:22:16.909 } 00:22:16.909 } 00:22:16.909 ] 00:22:16.909 }, 00:22:16.909 { 00:22:16.909 "subsystem": 
"sock", 00:22:16.909 "config": [ 00:22:16.909 { 00:22:16.909 "method": "sock_set_default_impl", 00:22:16.909 "params": { 00:22:16.909 "impl_name": "uring" 00:22:16.909 } 00:22:16.909 }, 00:22:16.909 { 00:22:16.909 "method": "sock_impl_set_options", 00:22:16.909 "params": { 00:22:16.909 "impl_name": "ssl", 00:22:16.909 "recv_buf_size": 4096, 00:22:16.909 "send_buf_size": 4096, 00:22:16.909 "enable_recv_pipe": true, 00:22:16.909 "enable_quickack": false, 00:22:16.909 "enable_placement_id": 0, 00:22:16.909 "enable_zerocopy_send_server": true, 00:22:16.909 "enable_zerocopy_send_client": false, 00:22:16.909 "zerocopy_threshold": 0, 00:22:16.909 "tls_version": 0, 00:22:16.909 "enable_ktls": false 00:22:16.909 } 00:22:16.909 }, 00:22:16.909 { 00:22:16.909 "method": "sock_impl_set_options", 00:22:16.909 "params": { 00:22:16.909 "impl_name": "posix", 00:22:16.909 "recv_buf_size": 2097152, 00:22:16.909 "send_buf_size": 2097152, 00:22:16.909 "enable_recv_pipe": true, 00:22:16.910 "enable_quickack": false, 00:22:16.910 "enable_placement_id": 0, 00:22:16.910 "enable_zerocopy_send_server": true, 00:22:16.910 "enable_zerocopy_send_client": false, 00:22:16.910 "zerocopy_threshold": 0, 00:22:16.910 "tls_version": 0, 00:22:16.910 "enable_ktls": false 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "sock_impl_set_options", 00:22:16.910 "params": { 00:22:16.910 "impl_name": "uring", 00:22:16.910 "recv_buf_size": 2097152, 00:22:16.910 "send_buf_size": 2097152, 00:22:16.910 "enable_recv_pipe": true, 00:22:16.910 "enable_quickack": false, 00:22:16.910 "enable_placement_id": 0, 00:22:16.910 "enable_zerocopy_send_server": false, 00:22:16.910 "enable_zerocopy_send_client": false, 00:22:16.910 "zerocopy_threshold": 0, 00:22:16.910 "tls_version": 0, 00:22:16.910 "enable_ktls": false 00:22:16.910 } 00:22:16.910 } 00:22:16.910 ] 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "subsystem": "vmd", 00:22:16.910 "config": [] 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "subsystem": "accel", 00:22:16.910 "config": [ 00:22:16.910 { 00:22:16.910 "method": "accel_set_options", 00:22:16.910 "params": { 00:22:16.910 "small_cache_size": 128, 00:22:16.910 "large_cache_size": 16, 00:22:16.910 "task_count": 2048, 00:22:16.910 "sequence_count": 2048, 00:22:16.910 "buf_count": 2048 00:22:16.910 } 00:22:16.910 } 00:22:16.910 ] 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "subsystem": "bdev", 00:22:16.910 "config": [ 00:22:16.910 { 00:22:16.910 "method": "bdev_set_options", 00:22:16.910 "params": { 00:22:16.910 "bdev_io_pool_size": 65535, 00:22:16.910 "bdev_io_cache_size": 256, 00:22:16.910 "bdev_auto_examine": true, 00:22:16.910 "iobuf_small_cache_size": 128, 00:22:16.910 "iobuf_large_cache_size": 16 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_raid_set_options", 00:22:16.910 "params": { 00:22:16.910 "process_window_size_kb": 1024, 00:22:16.910 "process_max_bandwidth_mb_sec": 0 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_iscsi_set_options", 00:22:16.910 "params": { 00:22:16.910 "timeout_sec": 30 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_nvme_set_options", 00:22:16.910 "params": { 00:22:16.910 "action_on_timeout": "none", 00:22:16.910 "timeout_us": 0, 00:22:16.910 "timeout_admin_us": 0, 00:22:16.910 "keep_alive_timeout_ms": 10000, 00:22:16.910 "arbitration_burst": 0, 00:22:16.910 "low_priority_weight": 0, 00:22:16.910 "medium_priority_weight": 0, 00:22:16.910 "high_priority_weight": 0, 00:22:16.910 "nvme_adminq_poll_period_us": 
10000, 00:22:16.910 "nvme_ioq_poll_period_us": 0, 00:22:16.910 "io_queue_requests": 512, 00:22:16.910 "delay_cmd_submit": true, 00:22:16.910 "transport_retry_count": 4, 00:22:16.910 "bdev_retry_count": 3, 00:22:16.910 "transport_ack_timeout": 0, 00:22:16.910 "ctrlr_loss_timeout_sec": 0, 00:22:16.910 "reconnect_delay_sec": 0, 00:22:16.910 "fast_io_fail_timeout_sec": 0, 00:22:16.910 "disable_auto_failback": false, 00:22:16.910 "generate_uuids": false, 00:22:16.910 "transport_tos": 0, 00:22:16.910 "nvme_error_stat": false, 00:22:16.910 "rdma_srq_size": 0, 00:22:16.910 "io_path_stat": false, 00:22:16.910 "allow_accel_sequence": false, 00:22:16.910 "rdma_max_cq_size": 0, 00:22:16.910 "rdma_cm_event_timeout_ms": 0, 00:22:16.910 "dhchap_digests": [ 00:22:16.910 "sha256", 00:22:16.910 "sha384", 00:22:16.910 "sha512" 00:22:16.910 ], 00:22:16.910 "dhchap_dhgroups": [ 00:22:16.910 "null", 00:22:16.910 "ffdhe2048", 00:22:16.910 "ffdhe3072", 00:22:16.910 "ffdhe4096", 00:22:16.910 "ffdhe6144", 00:22:16.910 "ffdhe8192" 00:22:16.910 ], 00:22:16.910 "rdma_umr_per_io": false 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_nvme_attach_controller", 00:22:16.910 "params": { 00:22:16.910 "name": "nvme0", 00:22:16.910 "trtype": "TCP", 00:22:16.910 "adrfam": "IPv4", 00:22:16.910 "traddr": "127.0.0.1", 00:22:16.910 "trsvcid": "4420", 00:22:16.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:16.910 "prchk_reftag": false, 00:22:16.910 "prchk_guard": false, 00:22:16.910 "ctrlr_loss_timeout_sec": 0, 00:22:16.910 "reconnect_delay_sec": 0, 00:22:16.910 "fast_io_fail_timeout_sec": 0, 00:22:16.910 "psk": "key0", 00:22:16.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:16.910 "hdgst": false, 00:22:16.910 "ddgst": false, 00:22:16.910 "multipath": "multipath" 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_nvme_set_hotplug", 00:22:16.910 "params": { 00:22:16.910 "period_us": 100000, 00:22:16.910 "enable": false 00:22:16.910 } 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "method": "bdev_wait_for_examine" 00:22:16.910 } 00:22:16.910 ] 00:22:16.910 }, 00:22:16.910 { 00:22:16.910 "subsystem": "nbd", 00:22:16.910 "config": [] 00:22:16.910 } 00:22:16.910 ] 00:22:16.910 }' 00:22:16.910 14:03:09 keyring_file -- keyring/file.sh@115 -- # killprocess 86612 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86612 ']' 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86612 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86612 00:22:16.910 killing process with pid 86612 00:22:16.910 Received shutdown signal, test time was about 1.000000 seconds 00:22:16.910 00:22:16.910 Latency(us) 00:22:16.910 [2024-12-11T14:03:09.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.910 [2024-12-11T14:03:09.957Z] =================================================================================================================== 00:22:16.910 [2024-12-11T14:03:09.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 86612' 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@973 -- # kill 86612 00:22:16.910 14:03:09 keyring_file -- common/autotest_common.sh@978 -- # wait 86612 00:22:17.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:17.169 14:03:10 keyring_file -- keyring/file.sh@118 -- # bperfpid=86855 00:22:17.169 14:03:10 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86855 /var/tmp/bperf.sock 00:22:17.169 14:03:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86855 ']' 00:22:17.169 14:03:10 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:17.169 "subsystems": [ 00:22:17.169 { 00:22:17.169 "subsystem": "keyring", 00:22:17.169 "config": [ 00:22:17.169 { 00:22:17.169 "method": "keyring_file_add_key", 00:22:17.169 "params": { 00:22:17.169 "name": "key0", 00:22:17.169 "path": "/tmp/tmp.HJiZBdKslf" 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "keyring_file_add_key", 00:22:17.169 "params": { 00:22:17.169 "name": "key1", 00:22:17.169 "path": "/tmp/tmp.SzimadEuDU" 00:22:17.169 } 00:22:17.169 } 00:22:17.169 ] 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "subsystem": "iobuf", 00:22:17.169 "config": [ 00:22:17.169 { 00:22:17.169 "method": "iobuf_set_options", 00:22:17.169 "params": { 00:22:17.169 "small_pool_count": 8192, 00:22:17.169 "large_pool_count": 1024, 00:22:17.169 "small_bufsize": 8192, 00:22:17.169 "large_bufsize": 135168, 00:22:17.169 "enable_numa": false 00:22:17.169 } 00:22:17.169 } 00:22:17.169 ] 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "subsystem": "sock", 00:22:17.169 "config": [ 00:22:17.169 { 00:22:17.169 "method": "sock_set_default_impl", 00:22:17.169 "params": { 00:22:17.169 "impl_name": "uring" 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "sock_impl_set_options", 00:22:17.169 "params": { 00:22:17.169 "impl_name": "ssl", 00:22:17.169 "recv_buf_size": 4096, 00:22:17.169 "send_buf_size": 4096, 00:22:17.169 "enable_recv_pipe": true, 00:22:17.169 "enable_quickack": false, 00:22:17.169 "enable_placement_id": 0, 00:22:17.169 "enable_zerocopy_send_server": true, 00:22:17.169 "enable_zerocopy_send_client": false, 00:22:17.169 "zerocopy_threshold": 0, 00:22:17.169 "tls_version": 0, 00:22:17.169 "enable_ktls": false 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "sock_impl_set_options", 00:22:17.169 "params": { 00:22:17.169 "impl_name": "posix", 00:22:17.169 "recv_buf_size": 2097152, 00:22:17.169 "send_buf_size": 2097152, 00:22:17.169 "enable_recv_pipe": true, 00:22:17.169 "enable_quickack": false, 00:22:17.169 "enable_placement_id": 0, 00:22:17.169 "enable_zerocopy_send_server": true, 00:22:17.169 "enable_zerocopy_send_client": false, 00:22:17.169 "zerocopy_threshold": 0, 00:22:17.169 "tls_version": 0, 00:22:17.169 "enable_ktls": false 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "sock_impl_set_options", 00:22:17.169 "params": { 00:22:17.169 "impl_name": "uring", 00:22:17.169 "recv_buf_size": 2097152, 00:22:17.169 "send_buf_size": 2097152, 00:22:17.169 "enable_recv_pipe": true, 00:22:17.169 "enable_quickack": false, 00:22:17.169 "enable_placement_id": 0, 00:22:17.169 "enable_zerocopy_send_server": false, 00:22:17.169 "enable_zerocopy_send_client": false, 00:22:17.169 "zerocopy_threshold": 0, 00:22:17.169 "tls_version": 0, 00:22:17.169 "enable_ktls": false 00:22:17.169 } 00:22:17.169 } 00:22:17.169 ] 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "subsystem": "vmd", 00:22:17.169 "config": [] 00:22:17.169 
}, 00:22:17.169 { 00:22:17.169 "subsystem": "accel", 00:22:17.169 "config": [ 00:22:17.169 { 00:22:17.169 "method": "accel_set_options", 00:22:17.169 "params": { 00:22:17.169 "small_cache_size": 128, 00:22:17.169 "large_cache_size": 16, 00:22:17.169 "task_count": 2048, 00:22:17.169 "sequence_count": 2048, 00:22:17.169 "buf_count": 2048 00:22:17.169 } 00:22:17.169 } 00:22:17.169 ] 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "subsystem": "bdev", 00:22:17.169 "config": [ 00:22:17.169 { 00:22:17.169 "method": "bdev_set_options", 00:22:17.169 "params": { 00:22:17.169 "bdev_io_pool_size": 65535, 00:22:17.169 "bdev_io_cache_size": 256, 00:22:17.169 "bdev_auto_examine": true, 00:22:17.169 "iobuf_small_cache_size": 128, 00:22:17.169 "iobuf_large_cache_size": 16 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "bdev_raid_set_options", 00:22:17.169 "params": { 00:22:17.169 "process_window_size_kb": 1024, 00:22:17.169 "process_max_bandwidth_mb_sec": 0 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "bdev_iscsi_set_options", 00:22:17.169 "params": { 00:22:17.169 "timeout_sec": 30 00:22:17.169 } 00:22:17.169 }, 00:22:17.169 { 00:22:17.169 "method": "bdev_nvme_set_options", 00:22:17.169 "params": { 00:22:17.173 "action_on_timeout": "none", 00:22:17.173 "timeout_us": 0, 00:22:17.173 "timeout_admin_us": 0, 00:22:17.173 "keep_alive_timeout_ms": 10000, 00:22:17.173 "arbitration_burst": 0, 00:22:17.173 "low_priority_weight": 0, 00:22:17.173 "medium_priority_weight": 0, 00:22:17.173 "high_priority_weight": 0, 00:22:17.173 "nvme_adminq_poll_period_us": 10000, 00:22:17.173 "nvme_ioq_poll_period_us": 0, 00:22:17.173 "io_queue_requests": 512, 00:22:17.173 "delay_cmd_submit": true, 00:22:17.173 "transport_retry_count": 4, 00:22:17.173 "bdev_retry_count": 3, 00:22:17.173 "transport_ack_timeout": 0, 00:22:17.173 "ctrlr_loss_timeout_sec": 0, 00:22:17.173 "reconnect_delay_sec": 0, 00:22:17.173 "fast_io_fail_timeout_sec": 0, 00:22:17.173 "disable_auto_failback": false, 00:22:17.173 "generate_uuids": false, 00:22:17.173 "transport_tos": 0, 00:22:17.173 "nvme_error_stat": false, 00:22:17.173 "rdma_srq_size": 0, 00:22:17.173 "io_path_stat": false, 00:22:17.173 "allow_accel_sequence": false, 00:22:17.173 "rdma_max_cq_size": 0, 00:22:17.173 "rdma_cm_event_timeout_ms": 0, 00:22:17.173 "dhchap_digests": [ 00:22:17.173 "sha256", 00:22:17.173 "sha384", 00:22:17.173 "sha512" 00:22:17.173 ], 00:22:17.173 "dhchap_dhgroups": [ 00:22:17.173 "null", 00:22:17.173 "ffdhe2048", 00:22:17.173 "ffdhe3072", 00:22:17.173 "ffdhe4096", 00:22:17.173 "ffdhe6144", 00:22:17.173 "ffdhe8192" 00:22:17.173 ], 00:22:17.173 "rdma_umr_per_io": false 00:22:17.173 } 00:22:17.173 }, 00:22:17.173 { 00:22:17.173 "method": "bdev_nvme_attach_controller", 00:22:17.173 "params": { 00:22:17.173 "name": "nvme0", 00:22:17.173 "trtype": "TCP", 00:22:17.173 "adrfam": "IPv4", 00:22:17.173 "traddr": "127.0.0.1", 00:22:17.173 "trsvcid": "4420", 00:22:17.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.173 "prchk_reftag": false, 00:22:17.173 "prchk_guard": false, 00:22:17.173 "ctrlr_loss_timeout_sec": 0, 00:22:17.173 "reconnect_delay_sec": 0, 00:22:17.173 "fast_io_fail_timeout_sec": 0, 00:22:17.173 "psk": "key0", 00:22:17.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.173 "hdgst": false, 00:22:17.173 "ddgst": false, 00:22:17.173 "multipath": "multipath" 00:22:17.173 } 00:22:17.173 }, 00:22:17.173 { 00:22:17.173 "method": "bdev_nvme_set_hotplug", 00:22:17.173 "params": { 00:22:17.173 "period_us": 100000, 00:22:17.173 
"enable": false 00:22:17.173 } 00:22:17.173 }, 00:22:17.173 { 00:22:17.173 "method": "bdev_wait_for_examine" 00:22:17.173 } 00:22:17.173 ] 00:22:17.173 }, 00:22:17.173 { 00:22:17.173 "subsystem": "nbd", 00:22:17.173 "config": [] 00:22:17.173 } 00:22:17.173 ] 00:22:17.173 }' 00:22:17.173 14:03:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:17.173 14:03:10 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:17.173 14:03:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.173 14:03:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:17.173 14:03:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.173 14:03:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:17.173 [2024-12-11 14:03:10.072301] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:17.173 [2024-12-11 14:03:10.072783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86855 ] 00:22:17.432 [2024-12-11 14:03:10.220959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.432 [2024-12-11 14:03:10.269962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.432 [2024-12-11 14:03:10.406102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:17.432 [2024-12-11 14:03:10.464450] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.367 14:03:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.367 14:03:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:18.367 14:03:11 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:18.367 14:03:11 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.367 14:03:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:18.367 14:03:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.367 14:03:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:18.625 14:03:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:18.625 14:03:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:18.625 14:03:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:18.625 14:03:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:18.625 14:03:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:18.625 14:03:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 
00:22:18.625 14:03:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.883 14:03:11 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:18.883 14:03:11 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:18.883 14:03:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:18.883 14:03:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:19.140 14:03:12 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:19.140 14:03:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:19.140 14:03:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HJiZBdKslf /tmp/tmp.SzimadEuDU 00:22:19.140 14:03:12 keyring_file -- keyring/file.sh@20 -- # killprocess 86855 00:22:19.140 14:03:12 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86855 ']' 00:22:19.140 14:03:12 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86855 00:22:19.140 14:03:12 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:19.140 14:03:12 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.140 14:03:12 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86855 00:22:19.398 killing process with pid 86855 00:22:19.398 Received shutdown signal, test time was about 1.000000 seconds 00:22:19.398 00:22:19.398 Latency(us) 00:22:19.398 [2024-12-11T14:03:12.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.398 [2024-12-11T14:03:12.445Z] =================================================================================================================== 00:22:19.398 [2024-12-11T14:03:12.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86855' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@973 -- # kill 86855 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@978 -- # wait 86855 00:22:19.398 14:03:12 keyring_file -- keyring/file.sh@21 -- # killprocess 86602 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86602 ']' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86602 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86602 00:22:19.398 killing process with pid 86602 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86602' 00:22:19.398 14:03:12 keyring_file -- common/autotest_common.sh@973 -- # kill 86602 00:22:19.399 14:03:12 keyring_file -- common/autotest_common.sh@978 -- # wait 86602 00:22:19.966 ************************************ 00:22:19.966 END TEST keyring_file 00:22:19.966 ************************************ 00:22:19.966 
00:22:19.966 real 0m14.887s 00:22:19.966 user 0m37.779s 00:22:19.966 sys 0m3.007s 00:22:19.966 14:03:12 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.966 14:03:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:19.966 14:03:12 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:22:19.966 14:03:12 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:19.966 14:03:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.966 14:03:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.966 14:03:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.966 ************************************ 00:22:19.966 START TEST keyring_linux 00:22:19.966 ************************************ 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:19.966 Joined session keyring: 384906392 00:22:19.966 * Looking for test storage... 00:22:19.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.966 14:03:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.966 --rc genhtml_branch_coverage=1 00:22:19.966 --rc genhtml_function_coverage=1 00:22:19.966 --rc genhtml_legend=1 00:22:19.966 --rc geninfo_all_blocks=1 00:22:19.966 --rc geninfo_unexecuted_blocks=1 00:22:19.966 00:22:19.966 ' 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.966 --rc genhtml_branch_coverage=1 00:22:19.966 --rc genhtml_function_coverage=1 00:22:19.966 --rc genhtml_legend=1 00:22:19.966 --rc geninfo_all_blocks=1 00:22:19.966 --rc geninfo_unexecuted_blocks=1 00:22:19.966 00:22:19.966 ' 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.966 --rc genhtml_branch_coverage=1 00:22:19.966 --rc genhtml_function_coverage=1 00:22:19.966 --rc genhtml_legend=1 00:22:19.966 --rc geninfo_all_blocks=1 00:22:19.966 --rc geninfo_unexecuted_blocks=1 00:22:19.966 00:22:19.966 ' 00:22:19.966 14:03:12 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.966 --rc genhtml_branch_coverage=1 00:22:19.966 --rc genhtml_function_coverage=1 00:22:19.966 --rc genhtml_legend=1 00:22:19.966 --rc geninfo_all_blocks=1 00:22:19.966 --rc geninfo_unexecuted_blocks=1 00:22:19.966 00:22:19.966 ' 00:22:19.966 14:03:12 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:19.966 14:03:12 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.966 14:03:12 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.966 14:03:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5a2a2f86-afba-4aa3-bf09-a6fac1c39ac5 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.966 14:03:13 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.967 14:03:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.967 14:03:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.967 14:03:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.967 14:03:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.967 14:03:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.967 14:03:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.967 14:03:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.967 14:03:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:19.967 14:03:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.967 14:03:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:22:19.967 14:03:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.967 14:03:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.967 14:03:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:20.226 /tmp/:spdk-test:key0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:20.226 14:03:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:20.226 /tmp/:spdk-test:key1 00:22:20.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.226 14:03:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86982 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.226 14:03:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86982 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86982 ']' 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.226 14:03:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:20.226 [2024-12-11 14:03:13.193772] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
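The prep_key calls traced above turn a plain key string plus a digest selector into the NVMeTLSkey-1 interchange string that gets written to the key file. A rough, self-contained sketch of that step, assuming the interchange payload is the key bytes followed by their little-endian CRC-32; the authoritative implementation is format_key in test/nvmf/common.sh, invoked via python in the trace, and the function name below is made up for illustration:

format_interchange_psk_sketch() {
    local key=$1 digest=$2
    # Prefix, two-digit hash id, base64(key || CRC-32 of key, little-endian), trailing colon.
    python3 - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()   # the key string is used verbatim, not hex-decoded
digest = int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# With the assumptions above this should reproduce the value the test stores in
# /tmp/:spdk-test:key0 and later loads with keyctl.
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0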
00:22:20.226 [2024-12-11 14:03:13.194176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86982 ] 00:22:20.485 [2024-12-11 14:03:13.339661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.485 [2024-12-11 14:03:13.386787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.485 [2024-12-11 14:03:13.454330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:21.421 [2024-12-11 14:03:14.146823] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.421 null0 00:22:21.421 [2024-12-11 14:03:14.178785] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.421 [2024-12-11 14:03:14.179158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:21.421 883664800 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:21.421 908023393 00:22:21.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=87000 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:21.421 14:03:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 87000 /var/tmp/bperf.sock 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 87000 ']' 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.421 14:03:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:21.421 [2024-12-11 14:03:14.265457] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:21.421 [2024-12-11 14:03:14.265773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87000 ] 00:22:21.421 [2024-12-11 14:03:14.416334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.681 [2024-12-11 14:03:14.469624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.681 14:03:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.681 14:03:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:21.681 14:03:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:21.681 14:03:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:21.940 14:03:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:21.940 14:03:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:22.203 [2024-12-11 14:03:15.047002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:22.203 14:03:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:22.203 14:03:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:22.462 [2024-12-11 14:03:15.318346] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.462 nvme0n1 00:22:22.462 14:03:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:22.462 14:03:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:22.462 14:03:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:22.462 14:03:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:22.462 14:03:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.462 14:03:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:22.723 14:03:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:22.723 14:03:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:22.723 14:03:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:22.723 14:03:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:22.723 14:03:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:22.723 14:03:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:22.723 14:03:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.981 14:03:15 keyring_linux -- keyring/linux.sh@25 -- # sn=883664800 00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
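keyring_linux differs from keyring_file only in where the PSK lives: the interchange string goes into the kernel session keyring via keyctl and is referenced by name rather than by file path. A condensed sketch of the flow exercised above, using only commands that appear in this trace; the serial numbers keyctl prints (such as 883664800) vary per run, and bdevperf was started with --wait-for-rpc, hence the explicit framework_start_init:

# Put the PSK into the current session keyring; keyctl prints the key serial.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Enable the Linux keyring backend in bdevperf, finish init, then attach the
# controller referencing the key by its keyring name instead of a file path.
"$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$rpc" -s /var/tmp/bperf.sock framework_start_init
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Look the key back up by name to get its serial, as check_keys does.
keyctl search @s user :spdk-test:key0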
00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 883664800 == \8\8\3\6\6\4\8\0\0 ]] 00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 883664800 00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:22.982 14:03:15 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:23.240 Running I/O for 1 seconds... 00:22:24.177 14714.00 IOPS, 57.48 MiB/s 00:22:24.177 Latency(us) 00:22:24.177 [2024-12-11T14:03:17.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:24.177 nvme0n1 : 1.01 14705.82 57.44 0.00 0.00 8660.24 7328.12 18111.77 00:22:24.177 [2024-12-11T14:03:17.224Z] =================================================================================================================== 00:22:24.177 [2024-12-11T14:03:17.224Z] Total : 14705.82 57.44 0.00 0.00 8660.24 7328.12 18111.77 00:22:24.177 { 00:22:24.177 "results": [ 00:22:24.177 { 00:22:24.177 "job": "nvme0n1", 00:22:24.177 "core_mask": "0x2", 00:22:24.177 "workload": "randread", 00:22:24.177 "status": "finished", 00:22:24.177 "queue_depth": 128, 00:22:24.177 "io_size": 4096, 00:22:24.177 "runtime": 1.00926, 00:22:24.177 "iops": 14705.82406911995, 00:22:24.177 "mibps": 57.444625269999804, 00:22:24.177 "io_failed": 0, 00:22:24.177 "io_timeout": 0, 00:22:24.177 "avg_latency_us": 8660.23886807708, 00:22:24.177 "min_latency_us": 7328.1163636363635, 00:22:24.177 "max_latency_us": 18111.767272727273 00:22:24.177 } 00:22:24.177 ], 00:22:24.177 "core_count": 1 00:22:24.177 } 00:22:24.177 14:03:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:24.177 14:03:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:24.435 14:03:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:24.435 14:03:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:24.435 14:03:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:24.435 14:03:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:24.435 14:03:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:24.436 14:03:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.694 14:03:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:24.694 14:03:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:24.694 14:03:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:24.694 14:03:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:24.694 
14:03:17 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.694 14:03:17 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:24.694 14:03:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:24.953 [2024-12-11 14:03:17.913506] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.953 [2024-12-11 14:03:17.914188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3db90 (107): Transport endpoint is not connected 00:22:24.953 [2024-12-11 14:03:17.915165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3db90 (9): Bad file descriptor 00:22:24.954 [2024-12-11 14:03:17.916161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:24.954 [2024-12-11 14:03:17.916186] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:24.954 [2024-12-11 14:03:17.916197] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:24.954 [2024-12-11 14:03:17.916208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:24.954 request: 00:22:24.954 { 00:22:24.954 "name": "nvme0", 00:22:24.954 "trtype": "tcp", 00:22:24.954 "traddr": "127.0.0.1", 00:22:24.954 "adrfam": "ipv4", 00:22:24.954 "trsvcid": "4420", 00:22:24.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:24.954 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:24.954 "prchk_reftag": false, 00:22:24.954 "prchk_guard": false, 00:22:24.954 "hdgst": false, 00:22:24.954 "ddgst": false, 00:22:24.954 "psk": ":spdk-test:key1", 00:22:24.954 "allow_unrecognized_csi": false, 00:22:24.954 "method": "bdev_nvme_attach_controller", 00:22:24.954 "req_id": 1 00:22:24.954 } 00:22:24.954 Got JSON-RPC error response 00:22:24.954 response: 00:22:24.954 { 00:22:24.954 "code": -5, 00:22:24.954 "message": "Input/output error" 00:22:24.954 } 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@33 -- # sn=883664800 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 883664800 00:22:24.954 1 links removed 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@33 -- # sn=908023393 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 908023393 00:22:24.954 1 links removed 00:22:24.954 14:03:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 87000 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 87000 ']' 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 87000 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87000 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:24.954 killing process with pid 87000 00:22:24.954 Received shutdown signal, test time was about 1.000000 seconds 00:22:24.954 00:22:24.954 Latency(us) 00:22:24.954 [2024-12-11T14:03:18.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.954 [2024-12-11T14:03:18.001Z] 
=================================================================================================================== 00:22:24.954 [2024-12-11T14:03:18.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87000' 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@973 -- # kill 87000 00:22:24.954 14:03:17 keyring_linux -- common/autotest_common.sh@978 -- # wait 87000 00:22:25.213 14:03:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86982 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86982 ']' 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86982 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86982 00:22:25.213 killing process with pid 86982 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86982' 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 86982 00:22:25.213 14:03:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 86982 00:22:25.781 00:22:25.781 real 0m5.762s 00:22:25.781 user 0m10.961s 00:22:25.781 sys 0m1.571s 00:22:25.781 14:03:18 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.781 ************************************ 00:22:25.781 END TEST keyring_linux 00:22:25.781 ************************************ 00:22:25.781 14:03:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:25.781 14:03:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:25.781 14:03:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:25.781 14:03:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:25.781 14:03:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:25.781 14:03:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:25.781 14:03:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:22:25.781 14:03:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:25.781 14:03:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.781 14:03:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.781 14:03:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:25.781 14:03:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:25.781 14:03:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:25.781 14:03:18 -- common/autotest_common.sh@10 -- # set +x 00:22:27.686 INFO: APP 
EXITING 00:22:27.686 INFO: killing all VMs 00:22:27.686 INFO: killing vhost app 00:22:27.686 INFO: EXIT DONE 00:22:28.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:28.253 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:28.253 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:29.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.190 Cleaning 00:22:29.190 Removing: /var/run/dpdk/spdk0/config 00:22:29.190 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:29.190 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:29.190 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:29.190 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:29.190 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:29.190 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:29.190 Removing: /var/run/dpdk/spdk1/config 00:22:29.190 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:29.190 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:29.190 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:29.190 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:29.190 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:29.190 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:29.190 Removing: /var/run/dpdk/spdk2/config 00:22:29.190 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:29.190 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:29.190 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:29.190 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:29.190 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:29.190 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:29.190 Removing: /var/run/dpdk/spdk3/config 00:22:29.190 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:29.190 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:29.190 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:29.190 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:29.190 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:29.190 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:29.190 Removing: /var/run/dpdk/spdk4/config 00:22:29.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:29.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:29.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:29.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:29.190 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:29.190 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:29.190 Removing: /dev/shm/nvmf_trace.0 00:22:29.190 Removing: /dev/shm/spdk_tgt_trace.pid57999 00:22:29.190 Removing: /var/run/dpdk/spdk0 00:22:29.190 Removing: /var/run/dpdk/spdk1 00:22:29.190 Removing: /var/run/dpdk/spdk2 00:22:29.190 Removing: /var/run/dpdk/spdk3 00:22:29.190 Removing: /var/run/dpdk/spdk4 00:22:29.190 Removing: /var/run/dpdk/spdk_pid57846 00:22:29.190 Removing: /var/run/dpdk/spdk_pid57999 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58205 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58292 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58319 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58429 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58439 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58579 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58774 00:22:29.190 Removing: /var/run/dpdk/spdk_pid58928 00:22:29.190 
Removing: /var/run/dpdk/spdk_pid59001 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59083 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59169 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59246 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59285 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59315 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59390 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59473 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59912 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59956 00:22:29.190 Removing: /var/run/dpdk/spdk_pid59996 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60008 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60075 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60084 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60151 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60167 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60212 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60223 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60268 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60279 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60415 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60445 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60527 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60859 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60877 00:22:29.190 Removing: /var/run/dpdk/spdk_pid60908 00:22:29.191 Removing: /var/run/dpdk/spdk_pid60921 00:22:29.191 Removing: /var/run/dpdk/spdk_pid60937 00:22:29.191 Removing: /var/run/dpdk/spdk_pid60961 00:22:29.191 Removing: /var/run/dpdk/spdk_pid60975 00:22:29.191 Removing: /var/run/dpdk/spdk_pid60996 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61015 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61034 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61044 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61068 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61082 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61103 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61122 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61130 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61151 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61170 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61189 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61199 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61237 00:22:29.191 Removing: /var/run/dpdk/spdk_pid61256 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61280 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61352 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61386 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61390 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61424 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61435 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61441 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61490 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61498 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61532 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61538 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61555 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61559 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61574 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61583 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61593 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61603 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61631 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61663 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61667 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61701 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61705 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61718 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61759 00:22:29.450 Removing: 
/var/run/dpdk/spdk_pid61770 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61797 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61804 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61816 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61819 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61832 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61840 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61847 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61855 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61937 00:22:29.450 Removing: /var/run/dpdk/spdk_pid61984 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62102 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62136 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62181 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62195 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62212 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62232 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62269 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62279 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62357 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62384 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62428 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62504 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62573 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62602 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62702 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62749 00:22:29.450 Removing: /var/run/dpdk/spdk_pid62787 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63014 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63111 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63140 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63169 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63203 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63236 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63275 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63307 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63710 00:22:29.450 Removing: /var/run/dpdk/spdk_pid63752 00:22:29.450 Removing: /var/run/dpdk/spdk_pid64088 00:22:29.450 Removing: /var/run/dpdk/spdk_pid64561 00:22:29.450 Removing: /var/run/dpdk/spdk_pid64842 00:22:29.450 Removing: /var/run/dpdk/spdk_pid65677 00:22:29.450 Removing: /var/run/dpdk/spdk_pid66583 00:22:29.450 Removing: /var/run/dpdk/spdk_pid66706 00:22:29.450 Removing: /var/run/dpdk/spdk_pid66768 00:22:29.450 Removing: /var/run/dpdk/spdk_pid68169 00:22:29.450 Removing: /var/run/dpdk/spdk_pid68472 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72228 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72593 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72702 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72829 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72850 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72884 00:22:29.450 Removing: /var/run/dpdk/spdk_pid72905 00:22:29.450 Removing: /var/run/dpdk/spdk_pid73002 00:22:29.450 Removing: /var/run/dpdk/spdk_pid73128 00:22:29.450 Removing: /var/run/dpdk/spdk_pid73290 00:22:29.710 Removing: /var/run/dpdk/spdk_pid73378 00:22:29.710 Removing: /var/run/dpdk/spdk_pid73572 00:22:29.710 Removing: /var/run/dpdk/spdk_pid73640 00:22:29.710 Removing: /var/run/dpdk/spdk_pid73733 00:22:29.710 Removing: /var/run/dpdk/spdk_pid74103 00:22:29.710 Removing: /var/run/dpdk/spdk_pid74518 00:22:29.710 Removing: /var/run/dpdk/spdk_pid74519 00:22:29.710 Removing: /var/run/dpdk/spdk_pid74520 00:22:29.710 Removing: /var/run/dpdk/spdk_pid74780 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75053 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75441 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75449 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75768 
00:22:29.710 Removing: /var/run/dpdk/spdk_pid75793 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75807 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75838 00:22:29.710 Removing: /var/run/dpdk/spdk_pid75843 00:22:29.710 Removing: /var/run/dpdk/spdk_pid76199 00:22:29.710 Removing: /var/run/dpdk/spdk_pid76248 00:22:29.710 Removing: /var/run/dpdk/spdk_pid76580 00:22:29.710 Removing: /var/run/dpdk/spdk_pid76783 00:22:29.710 Removing: /var/run/dpdk/spdk_pid77214 00:22:29.710 Removing: /var/run/dpdk/spdk_pid77771 00:22:29.710 Removing: /var/run/dpdk/spdk_pid78655 00:22:29.710 Removing: /var/run/dpdk/spdk_pid79298 00:22:29.710 Removing: /var/run/dpdk/spdk_pid79306 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81343 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81409 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81456 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81523 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81622 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81675 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81723 00:22:29.710 Removing: /var/run/dpdk/spdk_pid81776 00:22:29.710 Removing: /var/run/dpdk/spdk_pid82135 00:22:29.710 Removing: /var/run/dpdk/spdk_pid83348 00:22:29.710 Removing: /var/run/dpdk/spdk_pid83500 00:22:29.710 Removing: /var/run/dpdk/spdk_pid83743 00:22:29.710 Removing: /var/run/dpdk/spdk_pid84339 00:22:29.710 Removing: /var/run/dpdk/spdk_pid84501 00:22:29.710 Removing: /var/run/dpdk/spdk_pid84662 00:22:29.710 Removing: /var/run/dpdk/spdk_pid84760 00:22:29.710 Removing: /var/run/dpdk/spdk_pid84924 00:22:29.710 Removing: /var/run/dpdk/spdk_pid85033 00:22:29.710 Removing: /var/run/dpdk/spdk_pid85743 00:22:29.710 Removing: /var/run/dpdk/spdk_pid85774 00:22:29.710 Removing: /var/run/dpdk/spdk_pid85809 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86066 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86100 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86131 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86602 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86612 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86855 00:22:29.710 Removing: /var/run/dpdk/spdk_pid86982 00:22:29.710 Removing: /var/run/dpdk/spdk_pid87000 00:22:29.710 Clean 00:22:29.710 14:03:22 -- common/autotest_common.sh@1453 -- # return 0 00:22:29.710 14:03:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:29.710 14:03:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.710 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.969 14:03:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:29.969 14:03:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.969 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.969 14:03:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:29.969 14:03:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:29.969 14:03:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:29.969 14:03:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:29.969 14:03:22 -- spdk/autotest.sh@398 -- # hostname 00:22:29.969 14:03:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:30.228 geninfo: WARNING: invalid 
characters removed from testname! 00:22:56.772 14:03:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:57.030 14:03:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.317 14:03:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:02.863 14:03:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:05.397 14:03:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:07.934 14:04:00 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:10.467 14:04:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:10.467 14:04:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:10.467 14:04:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:10.467 14:04:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:10.467 14:04:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:10.467 14:04:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:10.726 + [[ -n 5261 ]] 00:23:10.726 + sudo kill 5261 00:23:10.735 [Pipeline] } 00:23:10.750 [Pipeline] // timeout 00:23:10.756 [Pipeline] } 00:23:10.770 [Pipeline] // stage 00:23:10.775 [Pipeline] } 00:23:10.789 [Pipeline] // catchError 00:23:10.798 [Pipeline] stage 00:23:10.801 [Pipeline] { (Stop VM) 00:23:10.814 [Pipeline] sh 00:23:11.094 + vagrant halt 00:23:15.283 ==> default: 
Halting domain... 00:23:20.561 [Pipeline] sh 00:23:20.839 + vagrant destroy -f 00:23:24.127 ==> default: Removing domain... 00:23:24.138 [Pipeline] sh 00:23:24.416 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:24.424 [Pipeline] } 00:23:24.438 [Pipeline] // stage 00:23:24.442 [Pipeline] } 00:23:24.455 [Pipeline] // dir 00:23:24.460 [Pipeline] } 00:23:24.472 [Pipeline] // wrap 00:23:24.478 [Pipeline] } 00:23:24.489 [Pipeline] // catchError 00:23:24.497 [Pipeline] stage 00:23:24.499 [Pipeline] { (Epilogue) 00:23:24.510 [Pipeline] sh 00:23:24.790 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:32.918 [Pipeline] catchError 00:23:32.920 [Pipeline] { 00:23:32.934 [Pipeline] sh 00:23:33.216 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:33.475 Artifacts sizes are good 00:23:33.484 [Pipeline] } 00:23:33.498 [Pipeline] // catchError 00:23:33.508 [Pipeline] archiveArtifacts 00:23:33.515 Archiving artifacts 00:23:33.641 [Pipeline] cleanWs 00:23:33.652 [WS-CLEANUP] Deleting project workspace... 00:23:33.652 [WS-CLEANUP] Deferred wipeout is used... 00:23:33.659 [WS-CLEANUP] done 00:23:33.660 [Pipeline] } 00:23:33.675 [Pipeline] // stage 00:23:33.681 [Pipeline] } 00:23:33.695 [Pipeline] // node 00:23:33.701 [Pipeline] End of Pipeline 00:23:33.745 Finished: SUCCESS